[Yahoo-eng-team] [Bug 1548649] [NEW] Hypervisors panel on dashboard shows local disk usage that also counts volume usage

2016-02-22 Thread Zhe Jiang
Public bug reported:

All of the VMs we created use volumes on external storage. However, the
hypervisor view shows "Local Disk Usage: Used 340GB of 152GB", which is
logically incorrect. Local disk usage should not include volume usage.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1548649

Title:
  Hypervisors panel on dashboard shows local disk usage that also
  counts volume usage

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  All of the VMs we created use volumes on external storage. However,
  the hypervisor view shows "Local Disk Usage: Used 340GB of 152GB",
  which is logically incorrect. Local disk usage should not include
  volume usage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1548649/+subscriptions



[Yahoo-eng-team] [Bug 1548635] [NEW] devstack install fails because a directory is missing

2016-02-22 Thread zhaozhilong
Public bug reported:

1) When I use devstack to install Horizon:
Traceback (most recent call last):
  File "/bin/django-admin", line 11, in <module>
    sys.exit(execute_from_command_line())
  File "/usr/lib/python2.7/site-packages/django/core/management/__init__.py", line 354, in execute_from_command_line
    utility.execute()
  File "/usr/lib/python2.7/site-packages/django/core/management/__init__.py", line 346, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/lib/python2.7/site-packages/django/core/management/base.py", line 394, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/lib/python2.7/site-packages/django/core/management/base.py", line 445, in execute
    output = self.handle(*args, **options)
  File "/usr/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 168, in handle
    collected = self.collect()
  File "/usr/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 98, in collect
    for path, storage in finder.list(self.ignore_patterns):
  File "/usr/lib/python2.7/site-packages/django/contrib/staticfiles/finders.py", line 112, in list
    for path in utils.get_files(storage, ignore_patterns):
  File "/usr/lib/python2.7/site-packages/django/contrib/staticfiles/utils.py", line 28, in get_files
    directories, files = storage.listdir(location)
  File "/usr/lib/python2.7/site-packages/django/core/files/storage.py", line 299, in listdir
    for entry in os.listdir(path):
OSError: [Errno 2] No such file or directory: '/opt/stack/horizon/openstack_dashboard/themes/webroot'
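A hedged workaround sketch (the path is taken from the traceback above;
whether an empty directory is the right fix depends on the theme
packaging): collectstatic fails because storage.listdir() is pointed at
a directory that does not exist, so creating it lets static collection
proceed.

    import os

    # Hypothetical workaround: make sure the theme webroot directory
    # exists before running "django-admin collectstatic";
    # storage.listdir() raises OSError when a configured staticfiles
    # directory is absent.
    theme_webroot = '/opt/stack/horizon/openstack_dashboard/themes/webroot'
    if not os.path.isdir(theme_webroot):
        os.makedirs(theme_webroot)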

** Affects: horizon
 Importance: Undecided
 Assignee: zhaozhilong (zhaozhilong)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => zhaozhilong (zhaozhilong)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1548635

Title:
  devstack install fails because a directory is missing

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  1) When I use devstack to install Horizon:
  Traceback (most recent call last):
    File "/bin/django-admin", line 11, in <module>
      sys.exit(execute_from_command_line())
    File "/usr/lib/python2.7/site-packages/django/core/management/__init__.py", line 354, in execute_from_command_line
      utility.execute()
    File "/usr/lib/python2.7/site-packages/django/core/management/__init__.py", line 346, in execute
      self.fetch_command(subcommand).run_from_argv(self.argv)
    File "/usr/lib/python2.7/site-packages/django/core/management/base.py", line 394, in run_from_argv
      self.execute(*args, **cmd_options)
    File "/usr/lib/python2.7/site-packages/django/core/management/base.py", line 445, in execute
      output = self.handle(*args, **options)
    File "/usr/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 168, in handle
      collected = self.collect()
    File "/usr/lib/python2.7/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 98, in collect
      for path, storage in finder.list(self.ignore_patterns):
    File "/usr/lib/python2.7/site-packages/django/contrib/staticfiles/finders.py", line 112, in list
      for path in utils.get_files(storage, ignore_patterns):
    File "/usr/lib/python2.7/site-packages/django/contrib/staticfiles/utils.py", line 28, in get_files
      directories, files = storage.listdir(location)
    File "/usr/lib/python2.7/site-packages/django/core/files/storage.py", line 299, in listdir
      for entry in os.listdir(path):
  OSError: [Errno 2] No such file or directory: '/opt/stack/horizon/openstack_dashboard/themes/webroot'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1548635/+subscriptions



[Yahoo-eng-team] [Bug 1548633] [NEW] One of five network nodes is over-utilized (has more DHCP namespaces than the others)

2016-02-22 Thread Vishal Agarwal
Public bug reported:

I have 5 network nodes (NNs) on my setup, each with 32 GB RAM. All
have the same configuration (created from the same Ubuntu template).

I am running a scale scenario in which I create 4K networks, each with
one subnet, using Rally at a concurrency of 100. Ideally the network
namespaces should be divided equally among the 5 NNs, but one NN is
over-utilized, which in turn creates a resource crunch, and future
requests start failing on it.

The number of namespaces on the faulty NN is 1175 while the other NNs
have 650 to 750 namespaces. I ran the scenario twice and both times
the result was the same.

Please note that if I create networks one by one, without any
concurrency, the namespace distribution is even and the problem is not
seen.
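An illustrative sketch (not Neutron code) of why high concurrency can
skew the distribution: if a "least loaded" DHCP agent scheduler reads
agent loads before any concurrent request commits its binding, every
request in the batch picks the same agent.

    # Illustrative race, assuming a least-loaded selection strategy.
    agent_load = {'nn1': 100, 'nn2': 100, 'nn3': 100, 'nn4': 100, 'nn5': 99}

    def pick_agent():
        # Every concurrent request sees the same snapshot of the loads...
        return min(agent_load, key=agent_load.get)

    picks = [pick_agent() for _ in range(100)]  # 100-way concurrency
    for agent in picks:
        agent_load[agent] += 1  # ...and commits its binding only afterwards

    print(agent_load)  # nn5 ends up with ~199 networks; the rest unchanged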

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548633

Title:
  One of five network nodes is over-utilized (has more DHCP namespaces
  than the others)

Status in neutron:
  New

Bug description:
  I have 5 network nodes (NNs) on my setup, each with 32 GB RAM. All
  have the same configuration (created from the same Ubuntu template).

  I am running a scale scenario in which I create 4K networks, each
  with one subnet, using Rally at a concurrency of 100. Ideally the
  network namespaces should be divided equally among the 5 NNs, but one
  NN is over-utilized, which in turn creates a resource crunch, and
  future requests start failing on it.

  The number of namespaces on the faulty NN is 1175 while the other NNs
  have 650 to 750 namespaces. I ran the scenario twice and both times
  the result was the same.

  Please note that if I create networks one by one, without any
  concurrency, the namespace distribution is even and the problem is
  not seen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548633/+subscriptions



[Yahoo-eng-team] [Bug 1440773] Re: Remove WritableLogger as eventlet has a real logger interface in 0.17.2

2016-02-22 Thread Steve Martinelli
keystone patch: https://review.openstack.org/#/c/283078/1

** Also affects: keystone
   Importance: Undecided
   Status: New

** Changed in: keystone
   Status: New => In Progress

** Changed in: keystone
 Assignee: (unassigned) => Chaozhe Chen (chaozhe-chen)

** Changed in: keystone
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440773

Title:
  Remove WritableLogger as eventlet has a real logger interface in
  0.17.2

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  In Progress
Status in OpenStack Identity (keystone):
  In Progress
Status in Manila:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.log:
  Fix Released
Status in Trove:
  In Progress

Bug description:
  Info from Sean on IRC:

  the patch to use a real logger interface in eventlet has been released
  in 0.17.2, which means we should be able to phase out
  https://github.com/openstack/oslo.log/blob/master/oslo_log/loggers.py

  Eventlet PR was:
  https://github.com/eventlet/eventlet/pull/75

  thanks,
  dims
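For illustration, a minimal before/after sketch of what the phase-out
looks like for a service's WSGI server (the app and socket here are
placeholders; the WritableLogger usage follows the oslo.log module
linked above):

    import logging
    import eventlet
    import eventlet.wsgi

    LOG = logging.getLogger(__name__)

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok\n']

    sock = eventlet.listen(('127.0.0.1', 8080))

    # Before (eventlet < 0.17.2): the logger had to be wrapped so
    # eventlet could call .write() on it:
    #   from oslo_log.loggers import WritableLogger
    #   eventlet.wsgi.server(sock, app, log=WritableLogger(LOG))

    # After (eventlet >= 0.17.2): a real logging.Logger is accepted directly.
    eventlet.wsgi.server(sock, app, log=LOG)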

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1440773/+subscriptions



[Yahoo-eng-team] [Bug 1548604] [NEW] Cannot modify default settings of the LBaaS haproxy template

2016-02-22 Thread jhsea3do
Public bug reported:

I changed the haproxy base Jinja template file, setting the value of
the "timeout connect" option in the "defaults" section from 5000 to
4000.

The file is located at
/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/templates/haproxy_base.j2.

'''
. 
defaults
log global
retries 3
option redispatch
timeout connect 4000
timeout client 5
timeout server 5
.
'''

Then I restarted the neutron-server and neutron-lbaas-agent services.

Then I submitted a new LBaaS create job. It generated the haproxy
config file
/var/lib/neutron/lbaas/2a320b6d-bc86-4304-ab89-98438377ac83/conf,
and the "timeout connect" value still shows 5000.

'''
. 
defaults
log global
retries 3
option redispatch
timeout connect 5000
timeout client 5
timeout server 5
.
'''
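One way to narrow this down (a hedged debugging sketch; the module path
is assumed from the file location in the report): check which copy of
neutron_lbaas the agent actually imports, since an edit to one copy is
silently ignored if the service loads another (for example a stale egg
or a second install).

    import os
    import neutron_lbaas.services.loadbalancer.drivers.haproxy as haproxy_pkg

    pkg_dir = os.path.dirname(haproxy_pkg.__file__)
    print(pkg_dir)  # where the running Python actually loads the driver from

    template = os.path.join(pkg_dir, 'templates', 'haproxy_base.j2')
    with open(template) as f:
        for line in f:
            if 'timeout connect' in line:
                print(line.strip())  # should print "timeout connect 4000"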

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: haproxy jinja neutron-lbaas template

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548604

Title:
  Cannot modify default settings of the LBaaS haproxy template

Status in neutron:
  New

Bug description:
  I changed the haproxy base Jinja template file, setting the value of
  the "timeout connect" option in the "defaults" section from 5000 to
  4000.

  The file is located at
  /usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/templates/haproxy_base.j2.

  '''
  . 
  defaults
  log global
  retries 3
  option redispatch
  timeout connect 4000
  timeout client 5
  timeout server 5
  .
  '''

  Then I restarted the neutron-server and neutron-lbaas-agent
  services.

  Then I submitted a new LBaaS create job. It generated the haproxy
  config file
  /var/lib/neutron/lbaas/2a320b6d-bc86-4304-ab89-98438377ac83/conf,
  and the "timeout connect" value still shows 5000.

  '''
  . 
  defaults
  log global
  retries 3
  option redispatch
  timeout connect 5000
  timeout client 5
  timeout server 5
  .
  '''

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548604/+subscriptions



[Yahoo-eng-team] [Bug 1546793] Re: Fix neutron-fwaas cover tests

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/281567
Committed: 
https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=18ea965fa2026992ddee5e4568bf7d362de89d8a
Submitter: Jenkins
Branch:master

commit 18ea965fa2026992ddee5e4568bf7d362de89d8a
Author: James Arendt 
Date:   Fri Feb 12 00:22:30 2016 -0800

Fix neutron-fwaas cover tests

The tox.ini command for 'tox -e cover' breaks with error:
error: option --coverage-package-name not recognized

Appears to be same issue as found in neutron-vpnaas and fixed
there under https://review.openstack.org/#/c/217847/

Applying same fix to neutron-fwaas.

Same logic applies to 'tox -e cover-constraints', though
because upstream changed 'upper-constraints.txt' eventlet
to 0.18.3 in middle had to set to local stack value to run:
export UPPER_CONSTRAINTS_FILE=/opt/stack/requirements/
upper-constraints.txt

Closes-Bug: #1546793
Change-Id: I82fde90d1ed17f2495f8560b8e56febc10e3013c


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546793

Title:
  Fix neutron-fwaas cover tests

Status in neutron:
  Fix Released

Bug description:
  The tox.ini command for 'tox -e cover' breaks with error:
  cover runtests: commands[0] | python setup.py testr --coverage 
--coverage-package-name=neutron_fwaas --testr-args=
  usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
 or: setup.py --help [cmd1 cmd2 ...]
 or: setup.py --help-commands
 or: setup.py cmd --help

  error: option --coverage-package-name not recognized
  ERROR: InvocationError: '/opt/stack/neutron-fwaas/.tox/cover/bin/python 
setup.py testr --coverage --coverage-package-name=neutron_fwaas --testr-args='
  ___ summary 

  ERROR:   cover: commands failed

  Appears to be same issue as found in neutron-vpnaas and fixed there
  under https://review.openstack.org/#/c/217847/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546793/+subscriptions



[Yahoo-eng-team] [Bug 1539354] Re: django.conf.urls.patterns is deprecated since django 1.8

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/274049
Committed: 
https://git.openstack.org/cgit/openstack/senlin-dashboard/commit/?id=676b5713133ae047f16e3a6041990b2fb70ea015
Submitter: Jenkins
Branch:master

commit 676b5713133ae047f16e3a6041990b2fb70ea015
Author: shu-mutou 
Date:   Fri Jan 29 19:01:45 2016 +0900

Update URLs to Django 1.8 style

django.conf.urls.patterns() is deprecated since 1.8.
We should not use patterns(), so this patch updates URLs to
1.8 style.

Change-Id: I4192217356aa45ca7ba545985380820c5960382d
Closes-Bug: #1539354


** Changed in: senlin-dashboard
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1539354

Title:
  django.conf.urls.patterns is deprecated since django 1.8

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Magnum UI:
  In Progress
Status in senlin-dashboard:
  Fix Released

Bug description:
  We should not use django.conf.urls.patterns method in urls.py.

  https://docs.djangoproject.com/en/1.9/ref/urls/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1539354/+subscriptions



[Yahoo-eng-team] [Bug 1518296] Re: Non-SNAT'ed packets should be blocked

2016-02-22 Thread Kevin Benton
** Changed in: neutron
   Status: Opinion => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518296

Title:
  Non-SNAT'ed packets should be blocked

Status in neutron:
  New

Bug description:
  In current Neutron, when "neutron router-gateway-set" is run on a
  router whose "enable_snat" is false, non-SNAT'ed packets can reach
  other tenants via the external network. The packets don't pass
  through the other tenant's gateway, but they put extra load on the
  external network.

  Packets should be NAT'ed when flowing on the external network; non-
  SNAT'ed packets don't need to flow there. Therefore, non-SNAT'ed
  packets should be dropped inside their own tenant.

  I will fix as follows:

    * The router is Legacy mode and enable_snat is True:
  No change from current implementation.

    * The router is Legacy mode and enable_snat is False:
  Add a new rule dropping outbound non-SNAT'ed packets (see the
  sketch below).

    * The router is DVR mode and enable_snat is True:
  No change from current implementation.

    * The router is DVR mode and enable_snat is False:
  Don't create the SNAT namespace.
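For illustration, a hedged sketch of what the proposed drop rule could
look like, expressed in the style of Neutron's iptables rules (device
name and CIDR are placeholders, not the actual patch):

    # Sketch only: drop packets leaving via the router's external
    # gateway device that still carry an internal source address,
    # i.e. packets that were not SNAT'ed.
    ROUTER_GW_DEV = 'qg-xxxxxxxx-xx'   # router's external gateway device
    TENANT_CIDR = '10.0.0.0/24'        # an internal subnet behind the router

    drop_rule = ('-o %(dev)s -s %(cidr)s -j DROP'
                 % {'dev': ROUTER_GW_DEV, 'cidr': TENANT_CIDR})
    # e.g. iptables_manager.ipv4['filter'].add_rule('FORWARD', drop_rule)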

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518296/+subscriptions



[Yahoo-eng-team] [Bug 1518296] Re: Non-SNAT'ed packets should be blocked

2016-02-22 Thread Kevin Benton
If you have SNAT disabled and don't want traffic to flow onto the
external network, why would you attach an interface to the external
network in the first place?

** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518296

Title:
  Non-SNAT'ed packets should be blocked

Status in neutron:
  Opinion

Bug description:
  In current Neutron, when "neutron router-gateway-set" is run on a
  router whose "enable_snat" is false, non-SNAT'ed packets can reach
  other tenants via the external network. The packets don't pass
  through the other tenant's gateway, but they put extra load on the
  external network.

  Packets should be NAT'ed when flowing on the external network; non-
  SNAT'ed packets don't need to flow there. Therefore, non-SNAT'ed
  packets should be dropped inside their own tenant.

  I will fix as follows:

    * The router is Legacy mode and enable_snat is True:
  No change from current implementation.

    * The router is Legacy mode and enable_snat is False:
  Add a new rule dropping outbound non-SNAT'ed packets.

    * The router is DVR mode and enable_snat is True:
  No change from current implementation.

    * The router is DVR mode and enable_snat is False:
  Don't create the SNAT namespace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518296/+subscriptions



[Yahoo-eng-team] [Bug 1280105] Re: urllib/urllib2 is incompatible with Python 3

2016-02-22 Thread Kirill Zaitsev
** No longer affects: murano

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1280105

Title:
  urllib/urllib2 is incompatible with Python 3

Status in Ceilometer:
  Fix Released
Status in Cinder:
  In Progress
Status in Fuel for OpenStack:
  Fix Committed
Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Magnum:
  In Progress
Status in Manila:
  In Progress
Status in neutron:
  Fix Released
Status in python-troveclient:
  In Progress
Status in refstack:
  Fix Released
Status in Sahara:
  Fix Released
Status in tacker:
  In Progress
Status in tempest:
  In Progress
Status in Trove:
  In Progress
Status in Zuul:
  In Progress

Bug description:
  urllib/urllib2 is incompatible with Python 3
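A common migration pattern for this class of bug (hedged; each affected
project chose its own approach) is to import through six.moves so one
import works on both Python 2 and 3:

    # urllib2.urlopen / urlparse.urlparse (py2) vs urllib.request /
    # urllib.parse (py3), unified via six.moves:
    from six.moves.urllib import parse as urlparse
    from six.moves.urllib import request as urlrequest

    url = 'http://example.com/path?a=1'
    print(urlparse.urlparse(url).netloc)   # 'example.com'
    # urlrequest.urlopen(url) replaces urllib2.urlopen(url)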

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1280105/+subscriptions



[Yahoo-eng-team] [Bug 1548562] [NEW] Misleading response when trying to delete a project with children

2016-02-22 Thread Brant Knudson
Public bug reported:


When attempting to delete a project that has a child, the operation
is rejected as expected, but the message says it was rejected
because of an authority problem:

 You are not authorized to perform the requested action: cannot
 delete the project ... since it is not a leaf in the hierarchy.

This is misleading since the problem has nothing to do with
authority, and granting more authority isn't going to allow the
operation to work.
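A minimal sketch of the direction such a fix could take (the exception
name here is an assumption for illustration, not necessarily keystone's
actual fix): raise an error type whose message describes the
precondition failure rather than an authorization failure.

    class HierarchyError(Exception):
        """Rejection that is about state, not the caller's authority."""

    def delete_project(project_id, has_children):
        if has_children:
            # Before: an authorization-style "You are not authorized..."
            # message. After: say plainly why the request cannot proceed.
            raise HierarchyError(
                'Cannot delete project %s: it is not a leaf in the '
                'hierarchy.' % project_id)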

** Affects: keystone
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1548562

Title:
  Misleading response when trying to delete a project with children

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  
  When attempting to delete a project that has a child, the operation
  is rejected as expected, but the message says it was rejected
  because of an authority problem:

   You are not authorized to perform the requested action: cannot
   delete the project ... since it is not a leaf in the hierarchy.

  This is misleading since the problem has nothing to do with
  authority, and granting more authority isn't going to allow the
  operation to work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1548562/+subscriptions



[Yahoo-eng-team] [Bug 1548547] [NEW] Functional tests failing with FAIL: process-returncode

2016-02-22 Thread Assaf Muller
Public bug reported:

After https://review.openstack.org/#/c/277813/, we started seeing
failures in the functional job. The root cause is that the patch is
using self.addOnException, and it looks like if the method that is
invoked on exception itself raises an exception, testr freaks out and
fails the test. I think that in this particular case, the method
(collect_debug_info) may be executed out of order, after test cleanups
have already occurred, so namespace fixtures are already cleaned up.

Example trace:
http://paste.openstack.org/show/487818/

Failure rate seems to be hovering around 80% in the last couple of days.
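A hedged sketch of the failure mode and the defensive pattern
(testtools' addOnException is real; the handler body is illustrative):
an exception handler registered on a test must never raise itself, or
the test process output confuses testr.

    import traceback
    import testtools

    class ExampleFunctionalTest(testtools.TestCase):
        def setUp(self):
            super(ExampleFunctionalTest, self).setUp()
            # Handlers registered here run after a test failure; if one
            # of them raises (e.g. the namespace it inspects was already
            # torn down), testr reports FAIL: process-returncode.
            self.addOnException(self._collect_debug_info)

        def _collect_debug_info(self, exc_info):
            try:
                pass  # e.g. run "ip netns exec ..." diagnostics here
            except Exception:
                # Never let the debug hook raise; record and move on.
                traceback.print_exc()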

** Affects: neutron
 Importance: Critical
 Assignee: Assaf Muller (amuller)
 Status: Confirmed


** Tags: functional-tests gate-failure

** Changed in: neutron
 Assignee: (unassigned) => Assaf Muller (amuller)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548547

Title:
  Functional tests failing with FAIL: process-returncode

Status in neutron:
  Confirmed

Bug description:
  After https://review.openstack.org/#/c/277813/, we started seeing
  failures in the functional job. The root cause is that the patch is
  using self.addOnException, and it looks like if the method that is
  invoked on exception itself raises an exception, testr freaks out and
  fails the test. I think that in this particular case, the method
  (collect_debug_info) may be executed out of order, after test
  cleanups have already occurred, so namespace fixtures are already
  cleaned up.

  Example trace:
  http://paste.openstack.org/show/487818/

  Failure rate seems to be hovering around 80% in the last couple of
  days.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548547/+subscriptions



[Yahoo-eng-team] [Bug 1548545] [NEW] Terminating the process that starts API workers does not terminate the workers

2016-02-22 Thread Kirill Zaitsev
Public bug reported:

Experiencing an issue with master; my HEAD points to
7e27d6ef05ec856e7b62f65218bdef3896ac55a2.

Here is the full log of glance-api:

$ tox -e venv -- glance-api 
--config-file=/home/teferi/glance/etc/glance-api.conf
venv develop-inst-noop: /home/teferi/glance
venv installed: aioeventlet==0.4,alembic==0.8.4,amqp==1.4.9,anyjson==0.3.3,
-e git+https://git.openstack.org/openstack/app-catalog@2ea8f13555a07ac3b32abcf6b5ef40c4012bb5f2#egg=app_catalog_artifact_plugin=contrib/glare,
appdirs==1.4.0,automaton==1.2.0,Babel==2.2.0,cachetools==1.1.5,castellan==0.3.1,
cffi==1.5.2,contextlib2==0.5.1,coverage==4.0.3,cryptography==1.2.2,
debtcollector==1.3.0,decorator==4.0.9,docutils==0.12,enum34==1.1.2,
eventlet==0.18.4,extras==0.0.3,fasteners==0.14.1,fixtures==1.4.0,flake8==2.2.4,
funcsigs==0.4,functools32==3.2.3.post2,futures==3.0.5,futurist==0.12.0,
-e git+https://git.openstack.org/openstack/glance@7e27d6ef05ec856e7b62f65218bdef3896ac55a2#egg=glance,
glance-store==0.11.0,greenlet==0.4.9,hacking==0.10.2,httplib2==0.9.2,
idna==2.0,ipaddress==1.0.16,iso8601==0.1.11,Jinja2==2.8,jsonschema==2.5.1,
keystoneauth1==2.3.0,keystonemiddleware==4.3.0,kombu==3.0.33,linecache2==1.0.0,
Mako==1.0.3,MarkupSafe==0.23,mccabe==0.2.1,mock==1.3.0,monotonic==0.6,
mox3==0.14.0,msgpack-python==0.4.7,netaddr==0.7.18,netifaces==0.10.4,
networkx==1.11,os-client-config==1.15.0,oslo.concurrency==3.4.0,
oslo.config==3.7.0,oslo.context==2.0.0,oslo.db==4.5.0,oslo.i18n==3.3.0,
oslo.log==3.0.0,oslo.messaging==4.3.0,oslo.middleware==3.6.0,oslo.policy==1.4.0,
oslo.serialization==2.3.0,oslo.service==1.5.0,oslo.utils==3.6.0,
oslosphinx==4.3.0,oslotest==2.1.0,osprofiler==1.1.0,Paste==2.0.2,
PasteDeploy==1.5.2,pbr==1.8.1,pep8==1.5.7,pika==0.10.0,pika-pool==0.1.3,
positional==1.0.1,prettytable==0.7.2,psutil==1.2.1,psycopg2==2.6.1,
pyasn1==0.1.9,pycadf==2.1.0,pycparser==2.14,pycrypto==2.6.1,pyflakes==0.8.1,
Pygments==2.1.1,pyinotify==0.9.6,PyMySQL==0.7.1,pyOpenSSL==0.15.1,
pyrsistent==0.11.12,pysendfile==2.0.1,python-dateutil==2.4.2,
python-editor==0.5,python-keystoneclient==2.2.0,python-mimeparse==1.5.1,
python-subunit==1.2.0,python-swiftclient==2.7.0,pytz==2015.7,PyYAML==3.11,
qpid-python==0.32,reno==1.5.0,repoze.lru==0.6,requests==2.9.1,
requestsexceptions==1.1.3,retrying==1.3.3,Routes==2.2,semantic-version==2.5.0,
simplegeneric==0.8.1,six==1.10.0,Sphinx==1.2.3,SQLAlchemy==1.0.12,
sqlalchemy-migrate==0.10.0,sqlparse==0.1.18,stevedore==1.11.0,taskflow==1.28.0,
Tempita==0.5.2,testrepository==0.0.20,testresources==1.0.0,testscenarios==0.5.0,
testtools==2.0.0,traceback2==1.4.0,trollius==2.1,unittest2==1.1.0,WebOb==1.5.1,
wheel==0.29.0,wrapt==1.10.6,WSME==0.8.0,xattr==0.7.9
venv runtests: PYTHONHASHSEED='1343193181'
venv runtests: commands[0] | glance-api 
--config-file=/home/teferi/glance/etc/glance-api.conf
/home/teferi/glance/.tox/venv/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters to load are deprecated.  Call .resolve and .require separately.
  return pkg_resources.EntryPoint.parse("x=" + s).load(False)
/home/teferi/glance/.tox/venv/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters to load are deprecated.  Call .resolve and .require separately.
  return pkg_resources.EntryPoint.parse("x=" + s).load(False)
/home/teferi/glance/.tox/venv/local/lib/python2.7/site-packages/oslo_middleware/ssl.py:28: DeprecationWarning: The 'oslo_middleware.ssl' module usage is deprecated, please use oslo_middleware.http_proxy_to_wsgi instead
  "oslo_middleware.http_proxy_to_wsgi")
/home/teferi/glance/.tox/venv/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters to load are deprecated.  Call .resolve and .require separately.
  return pkg_resources.EntryPoint.parse("x=" + s).load(False)
/home/teferi/glance/.tox/venv/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters to load are deprecated.  Call .resolve and .require separately.
  return pkg_resources.EntryPoint.parse("x=" + s).load(False)
/home/teferi/glance/.tox/venv/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters to load are deprecated.  Call .resolve and .require separately.
  return pkg_resources.EntryPoint.parse("x=" + s).load(False)
/home/teferi/glance/.tox/venv/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters to load are deprecated.  Call .resolve and .require separately.
  return pkg_resources.EntryPoint.parse("x=" + s).load(False)
/home/teferi/glance/.tox/venv/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters to load are deprecated.  Call .resolve and .require separately.
  return pkg_resources.EntryPoint.parse("x=" + s).load(False)
/home/teferi/glance/.tox/venv/local/lib/python2.7/site-packages/cryptography/x509/__init__.py:32: PendingDeprecationWarning: CRLExtensionOID has been renamed to
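(The log above is truncated in the digest.) For context, a minimal
sketch of the parent/worker contract the bug title describes
(illustrative, not Glance code): the launcher process should forward
termination signals to the API workers it forked.

    import os
    import signal
    import sys
    import time

    children = []

    def _handle_term(signum, frame):
        for pid in children:
            os.kill(pid, signal.SIGTERM)  # forward the signal to each worker
        sys.exit(0)

    signal.signal(signal.SIGTERM, _handle_term)

    for _ in range(2):  # fork two "API workers"
        pid = os.fork()
        if pid == 0:
            signal.signal(signal.SIGTERM, signal.SIG_DFL)  # workers: default
            while True:
                time.sleep(1)  # stand-in for serving requests
        children.append(pid)

    while True:
        time.sleep(1)  # parent waits; killing it should also end the workers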

[Yahoo-eng-team] [Bug 1532562] Re: Cell capacities updates include available resources of compute nodes "down"

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/265651
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=2d03bb97a309341c5a2bcc978220cd5af5f32179
Submitter: Jenkins
Branch:master

commit 2d03bb97a309341c5a2bcc978220cd5af5f32179
Author: Belmiro Moreira 
Date:   Sun Jan 10 16:51:06 2016 +0100

Fix cell capacity when compute nodes are down

Available resources from compute nodes that are not sending
service heartbeats (not alive) should not be considered in cell
capacity updates.

Closes Bug: #1532562

Change-Id: I0a456053d9c5e5fba39eb92f4820003e86d7a205


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1532562

Title:
  Cell capacities updates include available resources of compute nodes
  "down"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If a child cell has compute nodes without a heartbeat update but
  still enabled (XXX state with "nova-manage service list"), the child
  cell continues to consider the available resources of these compute
  nodes when updating the cell capacity.
  This can be problematic when having several cells and trying to fill
  them completely.
  Requests are sent to the cell that can fit more instances of the
  requested type; however, when compute nodes are "down" the requests
  will fail with "No valid host" in the cell.

  When updating the cell capacity the "disabled" compute nodes are
  excluded. This should also happen if a compute node didn't have a
  heartbeat update during "CONF.service_down_time" (see the sketch
  after the reproduction steps).

  How to reproduce:
  1) Have a cell environment with 2 child cells (A and B).
  2) Have nova-cells running in "debug". Confirm that the "Received
  capacities from child cell" A and B (in top nova-cell log) matches
  the number of available resources.
  4) Stop some compute nodes in cell A.
  5) Confirm that the "Received capacities from child cell A" don't
  change.
  6) Cell scheduler can send requests to cell A that can fail with "No
  valid host".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1532562/+subscriptions



[Yahoo-eng-team] [Bug 1548531] [NEW] remove keypair warning text is insufficient

2016-02-22 Thread Eric Peterson
Public bug reported:

We have users remove keypairs a little too freely.  We would like to
improve the error message / warning text for users.

** Affects: horizon
 Importance: Undecided
 Assignee: Eric Peterson (ericpeterson-l)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Eric Peterson (ericpeterson-l)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1548531

Title:
  remove keypair warning text is insufficient

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  We have users remove keypairs a little too freely.  We would like to
  improve the error message / warning text for users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1548531/+subscriptions



[Yahoo-eng-team] [Bug 1548520] [NEW] FWaaS: .tox/dsvm-functional/bin/subunit-1to2: No such file or directory

2016-02-22 Thread Madhusudhan Kandadai
Public bug reported:

when running dsvm-functional tests at gate, it throws an error with the
following:

2016-02-20 01:24:16.504 | 2016-02-20 01:24:16.485 | 
--
  ---
2016-02-20 01:24:16.539 | 2016-02-20 01:24:16.519 | 
neutron_fwaas.tests.functional.test_fwaas_driver.TestFWaaSDriver.test_status_reporting
  0.170
2016-02-20 01:24:16.547 | 2016-02-20 01:24:16.522 | 
___ summary 
2016-02-20 01:24:16.549 | 2016-02-20 01:24:16.530 |   dsvm-functional: commands 
succeeded
2016-02-20 01:24:16.555 | 2016-02-20 01:24:16.537 |   congratulations :)
2016-02-20 01:24:16.562 | 2016-02-20 01:24:16.543 | + testr_exit_code=0
2016-02-20 01:24:16.565 | 2016-02-20 01:24:16.546 | + set -e
2016-02-20 01:24:16.566 | 2016-02-20 01:24:16.548 | + generate_testr_results
2016-02-20 01:24:16.596 | 2016-02-20 01:24:16.550 | + sudo -H -u stack chmod 
o+rw .
2016-02-20 01:24:16.596 | 2016-02-20 01:24:16.551 | + sudo -H -u stack chmod 
o+rw -R .testrepository
2016-02-20 01:24:16.597 | 2016-02-20 01:24:16.553 | + '[' -f .testrepository/0 
']'
2016-02-20 01:24:16.597 | 2016-02-20 01:24:16.560 | + 
.tox/dsvm-functional/bin/subunit-1to2
2016-02-20 01:24:16.597 | + return 1
2016-02-20 01:24:16.597 | 2016-02-20 01:24:16.563 | 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/post_test_hook.sh: 
line 14: .tox/dsvm-functional/bin/subunit-1to2: No such file or directory
2016-02-20 01:24:16.597 | + local ret_val=1
2016-02-20 01:24:16.597 | + sudo mv 
/home/jenkins/workspace/gate-neutron-fwaas-dsvm-api/devstack-gate-post_test_hook.txt
 /opt/stack/logs/


Set the 'env' correctly in tox.ini to make it work.

** Affects: neutron
 Importance: Undecided
 Assignee: Madhusudhan Kandadai (madhusudhan-kandadai)
 Status: In Progress


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548520

Title:
  FWaaS:  .tox/dsvm-functional/bin/subunit-1to2: No such file or
  directory

Status in neutron:
  In Progress

Bug description:
  when running dsvm-functional tests at gate, it throws an error with
  the following:

  2016-02-20 01:24:16.504 | 2016-02-20 01:24:16.485 | 
--
  ---
  2016-02-20 01:24:16.539 | 2016-02-20 01:24:16.519 | 
neutron_fwaas.tests.functional.test_fwaas_driver.TestFWaaSDriver.test_status_reporting
  0.170
  2016-02-20 01:24:16.547 | 2016-02-20 01:24:16.522 | 
___ summary 
  2016-02-20 01:24:16.549 | 2016-02-20 01:24:16.530 |   dsvm-functional: 
commands succeeded
  2016-02-20 01:24:16.555 | 2016-02-20 01:24:16.537 |   congratulations :)
  2016-02-20 01:24:16.562 | 2016-02-20 01:24:16.543 | + testr_exit_code=0
  2016-02-20 01:24:16.565 | 2016-02-20 01:24:16.546 | + set -e
  2016-02-20 01:24:16.566 | 2016-02-20 01:24:16.548 | + generate_testr_results
  2016-02-20 01:24:16.596 | 2016-02-20 01:24:16.550 | + sudo -H -u stack chmod 
o+rw .
  2016-02-20 01:24:16.596 | 2016-02-20 01:24:16.551 | + sudo -H -u stack chmod 
o+rw -R .testrepository
  2016-02-20 01:24:16.597 | 2016-02-20 01:24:16.553 | + '[' -f 
.testrepository/0 ']'
  2016-02-20 01:24:16.597 | 2016-02-20 01:24:16.560 | + 
.tox/dsvm-functional/bin/subunit-1to2
  2016-02-20 01:24:16.597 | + return 1
  2016-02-20 01:24:16.597 | 2016-02-20 01:24:16.563 | 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/post_test_hook.sh: 
line 14: .tox/dsvm-functional/bin/subunit-1to2: No such file or directory
  2016-02-20 01:24:16.597 | + local ret_val=1
  2016-02-20 01:24:16.597 | + sudo mv 
/home/jenkins/workspace/gate-neutron-fwaas-dsvm-api/devstack-gate-post_test_hook.txt
 /opt/stack/logs/

  
  Set the 'env' correctly in tox.ini to make it work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548520/+subscriptions



[Yahoo-eng-team] [Bug 1548511] [NEW] Shared pools support

2016-02-22 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/218560
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.

commit 4f3cf154829fcd69ecf3fa7f4e49f82d0104a4f0
Author: Stephen Balukoff 
Date:   Sat Aug 29 02:51:09 2015 -0700

Shared pools support

In preparation for L7 switching functionality, we need to
reduce the rigidity of our model somewhat and allow pools
to exist independent of listeners and be shared by 0 or
more listeners. With this patch, pools are now associated
with loadbalancers directly, and there is now a N:M
relationship between listeners and pools.

This patch does alter the Neutron LBaaS v2 API slightly,
but all these changes are backward compatible. Nevertheless,
since Neutron core dev team has asked that any API changes
take place in an extension, that is what is being done in
this patch.

This patch also updates the reference namespace driver to
render haproxy config templates correctly given the pool
sharing functionality added with the patch.

Finally, the nature of shared pools means that the usual
workflow for tenants can be (but doesn't have to be)
altered such that pools can be created before listeners
independently, and assigned to listeners as a later step.

DocImpact
APIImpact
Partially-Implements: blueprint lbaas-l7-rules
Change-Id: Ia0974b01f1f02771dda545c4cfb5ff428a9327b4
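For illustration, a hedged sketch of the altered tenant workflow using
python-neutronclient (method and field names follow the commit message
and the LBaaS v2 API, but the exact calls may differ; credentials and
IDs are placeholders):

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(
        username='admin', password='secret', tenant_name='admin',
        auth_url='http://controller:5000/v2.0')

    LB_ID = '...'        # existing load balancer (placeholder)
    LISTENER_ID = '...'  # existing listener (placeholder)

    # With shared pools, a pool can be created against the load balancer
    # directly, before any listener exists...
    pool = neutron.create_lbaas_pool(
        {'pool': {'loadbalancer_id': LB_ID,
                  'protocol': 'HTTP',
                  'lb_algorithm': 'ROUND_ROBIN'}})

    # ...and attached to a listener (or several) as a later step.
    neutron.update_listener(
        LISTENER_ID,
        {'listener': {'default_pool_id': pool['pool']['id']}})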

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron-lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548511

Title:
  Shared pools support

Status in neutron:
  New

Bug description:
  https://review.openstack.org/218560
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.

  commit 4f3cf154829fcd69ecf3fa7f4e49f82d0104a4f0
  Author: Stephen Balukoff 
  Date:   Sat Aug 29 02:51:09 2015 -0700

  Shared pools support
  
  In preparation for L7 switching functionality, we need to
  reduce the rigidity of our model somewhat and allow pools
  to exist independent of listeners and be shared by 0 or
  more listeners. With this patch, pools are now associated
  with loadbalancers directly, and there is now a N:M
  relationship between listeners and pools.
  
  This patch does alter the Neutron LBaaS v2 API slightly,
  but all these changes are backward compatible. Nevertheless,
  since Neutron core dev team has asked that any API changes
  take place in an extension, that is what is being done in
  this patch.
  
  This patch also updates the reference namespace driver to
  render haproxy config templates correctly given the pool
  sharing functionality added with the patch.
  
  Finally, the nature of shared pools means that the usual
  workflow for tenants can be (but doesn't have to be)
  altered such that pools can be created before listeners
  independently, and assigned to listeners as a later step.
  
  DocImpact
  APIImpact
  Partially-Implements: blueprint lbaas-l7-rules
  Change-Id: Ia0974b01f1f02771dda545c4cfb5ff428a9327b4

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548511/+subscriptions



[Yahoo-eng-team] [Bug 1548510] [NEW] Metadata modal does not enforce limits

2016-02-22 Thread Justin Pomeroy
Public bug reported:

There are several panels that use the Update Metadata modal to allow
editing metadata (Instances, Images, Flavors).  This metadata widget
does not enforce limits on the number of metadata items or the length of
the keys and values.  Limits should be enforced and proper error
messages displayed to prevent the user from submitting invalid data.

** Affects: horizon
 Importance: Undecided
 Assignee: Justin Pomeroy (jpomero)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Justin Pomeroy (jpomero)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1548510

Title:
  Metadata modal does not enforce limits

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There are several panels that use the Update Metadata modal to allow
  editing metadata (Instances, Images, Flavors).  This metadata widget
  does not enforce limits on the number of metadata items or the length
  of the keys and values.  Limits should be enforced and proper error
  messages displayed to prevent the user from submitting invalid data.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1548510/+subscriptions



[Yahoo-eng-team] [Bug 1548507] [NEW] Asterisks are missing in image fields: name, format

2016-02-22 Thread Susan Tan
Public bug reported:

Note that the blue asterisk is missing from 2 fields in the "Launch
image" pop-up modal.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: forms image modal popup

** Attachment added: "Note that format, name fields are missing *."
   https://bugs.launchpad.net/bugs/1548507/+attachment/4578371/+files/so_far.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1548507

Title:
  Asterisks are missing in image fields: name, format

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Note that the blue asterisk is missing from 2 fields in the "Launch
  image" pop-up modal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1548507/+subscriptions



[Yahoo-eng-team] [Bug 1536671] Re: libvirt detach_interface logs errors for network device not found after neutron network-vif-deleted event

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/270891
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b3624e00d0097dd7ccdf27e34d7351c0c97afea1
Submitter: Jenkins
Branch:master

commit b3624e00d0097dd7ccdf27e34d7351c0c97afea1
Author: Matt Riedemann 
Date:   Thu Jan 21 08:15:27 2016 -0800

libvirt: check for interface when detach_interface fails

When using Neutron and deleting an instance, we race against
deleting the domain and Neutron sending a vif-deleted event
which triggers a call to detach_interface. If the network
device is not found when we go to detach it from the config,
libvirt raises an error like:

libvirtError: operation failed: no matching network device was found

Unfortunately libvirt does not have a unique error code for this
and the error message is translatable, so we can't key off of it
to check if the failure is just due to the device not being found.

This change adds a method to the guest object to lookup the interface
device config by MAC address and if not found, we simply log a warning
rather than tracing an error for a case that we can expect when using
Neutron.

Closes-Bug: #1536671

Change-Id: I8ae352ff3eeb760c97d1a6fa9d7a59e881d7aea1
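A simplified sketch of the approach described above (the real fix adds
a guest method that looks up the interface config by MAC address; the
substring check here is a stand-in for that lookup):

    import libvirt

    def detach_interface(domain, cfg_xml, mac):
        try:
            domain.detachDeviceFlags(cfg_xml,
                                     flags=libvirt.VIR_DOMAIN_AFFECT_LIVE)
        except libvirt.libvirtError:
            if mac not in domain.XMLDesc(0):
                # Device already gone (domain deletion raced with
                # neutron's vif-deleted event): warn instead of tracing
                # an error.
                return
            raise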


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1536671

Title:
  libvirt detach_interface logs errors for network device not found
  after neutron network-vif-deleted event

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I've started noticing a lot of these in the neutron job logs:

  http://logs.openstack.org/67/269867/4/check/gate-tempest-dsvm-neutron-
  src-os-brick/7617a9f/logs/screen-n-cpu.txt.gz#_2016-01-21_05_38_48_667

  2016-01-21 05:38:48.667 ERROR nova.virt.libvirt.driver 
[req-c8971f87-303e-460b-894b-7e2ccde9944f nova service] [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d] detaching network adapter failed.
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d] Traceback (most recent call last):
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1354, in 
detach_interface
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d] guest.detach_device(cfg, 
persistent=True, live=live)
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d]   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 341, in detach_device
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d] 
self._domain.detachDeviceFlags(conf.to_xml(), flags=flags)
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in 
proxy_call
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d] rv = execute(f, *args, **kwargs)
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d] six.reraise(c, e, tb)
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d] rv = meth(*args, **kwargs)
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d]   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 985, in 
detachDeviceFlags
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d] if ret == -1: raise libvirtError 
('virDomainDetachDeviceFlags() failed', dom=self)
  2016-01-21 05:38:48.667 12834 ERROR nova.virt.libvirt.driver [instance: 
d8d15c87-79cc-4b63-99bb-64dde4576b3d] 

[Yahoo-eng-team] [Bug 1548466] [NEW] Deprecate 'force_gateway_on_subnet' configuration option

2016-02-22 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/277303
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 3e6d602f542fc97e64cd5503bf1ea4e71d648abf
Author: Sreekumar S 
Date:   Mon Feb 8 12:58:01 2016 +0530

Deprecate 'force_gateway_on_subnet' configuration option

Currently 'force_gateway_on_subnet' configuration is set to True
by default and enforces the subnet on to the gateway. With the
fix in https://review.openstack.org/#/c/233287/, gateway outside
the subnet can be added, and the configuration option now has
lost its significance.

With this patch, the configuration option is deprecated.
It should be removed in Newton release, and the system should
always allow gateway outside the subnet.
This patch is dependent on the fix for adding gateway outside
the subnet, mentioned above.

DocImpact: 'force_gateway_on_subnet' description should be
updated in the docs and marked as deprecated to be removed in
the Newton release.

Change-Id: I28b3d7add303ee479fc071c1de142b0f7811e4e5
Closes-Bug: #1335023
Closes-Bug: #1398768
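For reference, a hedged sketch of how such a deprecation is typically
expressed with oslo.config (the real option lives in neutron's L3
configuration; the help text here is indicative):

    from oslo_config import cfg

    opts = [
        cfg.BoolOpt('force_gateway_on_subnet',
                    default=True,
                    deprecated_for_removal=True,
                    help='Ensure that the configured gateway is on the '
                         'subnet. Deprecated; to be removed in Newton, '
                         'after which a gateway outside the subnet is '
                         'always allowed.'),
    ]
    cfg.CONF.register_opts(opts)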

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548466

Title:
  Deprecate 'force_gateway_on_subnet' configuration option

Status in neutron:
  New

Bug description:
  https://review.openstack.org/277303
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 3e6d602f542fc97e64cd5503bf1ea4e71d648abf
  Author: Sreekumar S 
  Date:   Mon Feb 8 12:58:01 2016 +0530

  Deprecate 'force_gateway_on_subnet' configuration option
  
  Currently 'force_gateway_on_subnet' configuration is set to True
  by default and enforces the subnet on to the gateway. With the
  fix in https://review.openstack.org/#/c/233287/, gateway outside
  the subnet can be added, and the configuration option now has
  lost its significance.
  
  With this patch, the configuration option is deprecated.
  It should be removed in Newton release, and the system should
  always allow gateway outside the subnet.
  This patch is dependent on the fix for adding gateway outside
  the subnet, mentioned above.
  
  DocImpact: 'force_gateway_on_subnet' description should be
  updated in the docs and marked as deprecated to be removed in
  the Newton release.
  
  Change-Id: I28b3d7add303ee479fc071c1de142b0f7811e4e5
  Closes-Bug: #1335023
  Closes-Bug: #1398768

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548466/+subscriptions



[Yahoo-eng-team] [Bug 1276694] Re: Openstack services should support SIGHUP signal

2016-02-22 Thread Michal Dulko
I'm not 100% sure that this is the intent of this bug, but I do believe
that Cinder isn't dying when receiving SIGHUP signal [1], so I'm marking
it as invalid.

[1] https://review.openstack.org/#/c/279039/

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276694

Title:
  Openstack services should support SIGHUP signal

Status in Cinder:
  Invalid
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Identity (keystone):
  Invalid
Status in Murano:
  Confirmed
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Invalid
Status in oslo.config:
  Fix Released
Status in oslo.log:
  Fix Released
Status in oslo.service:
  Fix Released
Status in Sahara:
  In Progress

Bug description:
  1) In order to more effectively manage the unlinked and open (lsof
  +L1) log file descriptors w/o restarting the services, the SIGHUP
  signal should be accepted by every OpenStack service.

  That would allow, e.g. logrotate jobs to gracefully HUP services after
  their log files were rotated. The only option we have for now is to
  force the services restart, quite a poor option from the services
  continuous accessibility PoV.

  Note: according to  http://en.wikipedia.org/wiki/Unix_signal
  SIGHUP
     ... Many daemons will reload their configuration files and reopen their 
logfiles instead of exiting when receiving this signal.

  Currently Murano and Glance are out of sync with Oslo SIGHUP support.

  There is also the following issue exists for some of the services of OS 
projects with synced SIGHUP support:
  2)
  heat-api-cfn, heat-api, heat-api-cloudwatch, keystone:  looks like the synced 
code is never being executed, thus SIGHUP is not supported for them. Here is a 
simple test scenario:
  2.1) modify 
/site-packages//openstack/common/service.py
  def _sighup_supported():
  +LOG.warning("SIGHUP is supported: {0}".format(hasattr(signal, 'SIGHUP')))
  return hasattr(signal, 'SIGHUP')
  2.2) restart service foo-service-name and check logs for "SIGHUP is 
supported", if service  really supports it, the appropriate messages would be 
present in the logs.
  2.3) issue kill -HUP  and check logs for "SIGHUP is 
supported" and "Caught SIGHUP", if service  really supports it, the appropriate 
messages would be present in the logs. Besides that, the service should remain 
started and its main thread PID should not be changed.

  e.g.
  2.a) heat-engine supports HUPing:
  #service openstack-heat-engine restart
  <132>Apr 11 14:03:48 node-3 heat-heat.openstack.common.service WARNING: 
SIGHUP is supported: True

  2.b) But heat-api doesn't know how to HUP:
  #service openstack-heat-api restart
  <134>Apr 11 14:06:22 node-3 heat-heat.api INFO: Starting Heat ReST API on 
0.0.0.0:8004
  <134>Apr 11 14:06:22 node-3 heat-eventlet.wsgi.server INFO: Starting single 
process server

  2.c) HUPing heat-engine is OK
  #pid=$(cat /var/run/heat/openstack-heat-engine.pid); kill -HUP $pid && echo 
$pid
  16512
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service INFO: Caught 
SIGHUP, exiting
  <132>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service WARNING: 
SIGHUP is supported: True
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.rpc.common INFO: 
Connected to AMQP server on ...
  service openstack-heat-engine status
  openstack-heat-engine (pid  16512) is running...

  2.d) HUPed heat-api is dead now ;(
  #kill -HUP $(cat /var/run/heat/openstack-heat-api.pid)
  (no new logs)
  # service openstack-heat-api status
  openstack-heat-api dead but pid file exists

  3)
  nova-cert, nova-novncproxy, nova-objectstore, nova-consoleauth, 
nova-scheduler - unlike to case 2, after kill -HUP  command 
was issued, there would be a "Caught SIGHUP" message in the logs, BUT the 
associated service would have got dead anyway. Instead, the service should 
remain started and its main thread PID should not be changed (similar to the 
2.c case).

  So, it looks like there are still a lot of things to be done to
  ensure POSIX standards abidance in OpenStack :-)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1276694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546758] Re: Inconsistent ordering for angular table actions

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/281531
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e3c31b9b6a33d8785dc306d4a268fad164c1cdde
Submitter: Jenkins
Branch:master

commit e3c31b9b6a33d8785dc306d4a268fad164c1cdde
Author: Justin Pomeroy 
Date:   Wed Feb 17 15:14:49 2016 -0600

Maintain order when resolving promise list

This updates the $qExtensions.allSettled method so that it maintains
the order of the original list of promises. The list of passed and
failed promises will be in the same order as they were in the
original list.

Closes-Bug: #1546758
Change-Id: I9de0b68a16c4f3e2a9a34fb8862de2d77b4a19bb


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546758

Title:
  Inconsistent ordering for angular table actions

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The horizon angular actions service uses $qExtensions.allSettled when
  resolving permitted actions.  The allSettled method does not enforce
  that the order of the pass and fail promise arrays is the same as the
  original list of promises, and this can cause the order of the actions
  to be inconsistent.  The order of the actions is actually determined
  by the order in which they are resolved.  This causes actions I want
  to be last in the menu (Delete) to sometimes show up as the default
  button action.
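
  A Python analogue of the fix (a sketch of the same idea, not the Horizon
  JavaScript): run tasks concurrently, but partition passed/failed results
  in submission order rather than completion order, so callers see a stable
  ordering:

    from concurrent.futures import ThreadPoolExecutor

    def all_settled(tasks):
        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(t) for t in tasks]  # submission order
        passed, failed = [], []
        for f in futures:  # iterate the original order, not as_completed()
            try:
                passed.append(f.result())
            except Exception as exc:
                failed.append(exc)
        return passed, failed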

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1547279] Re: nova.tests.unit.compute.test_compute.ComputeTestCase.test_run_instance_queries_macs takes an average of 1 minute to run

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/282148
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=519f4560a845a998333228c23d7b7866b9a67a3b
Submitter: Jenkins
Branch:master

commit 519f4560a845a998333228c23d7b7866b9a67a3b
Author: Matt Riedemann 
Date:   Thu Feb 18 19:05:30 2016 -0800

Fixed leaked UnexpectedMethodCallErrors in test_compute

Both of these tests were actually leaking UnexpectedMethodCallErrors
from mox due to the allocate_for_instance call being stubbed out
but the kwargs were in the wrong order.

Apparently something with mox, the stub_out fixture, and
NetworkInfoAsyncWrapper using nova.utils.spawn causes the stubs
to be on a greenthread, so when mox raises the error, it doesn't
fail the actual test; it just gets logged to stderr by eventlet.

In the case of the test_run_instance_queries_macs test, this was
actually causing the test to run over a minute (presumably because
it would eventually hit some timeout in the thread, like an rpc
timeout maybe?). With this fix it drops that down to around 1 second
to run the test.

We really need to figure out what's causing the stub to go on the
thread so that the test doesn't get the UnexpectedMethodCallError
and fail but that could come in a later change if someone can
figure it out.

Change-Id: Ibc5c881e17310304eb63f5df6360f8cb1657b807
Closes-Bug: #1547279


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1547279

Title:
  
nova.tests.unit.compute.test_compute.ComputeTestCase.test_run_instance_queries_macs
  takes an average of 1 minute to run

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://status.openstack.org//openstack-health/#/job/gate-nova-
  python27?groupKey=project=hour=P1M shows that
  the
  
nova.tests.unit.compute.test_compute.ComputeTestCase.test_run_instance_queries_macs
  unit test is taking around 1 minute to run.  It looks like most things
  should be stubbed out properly in that test so I'm not sure what's
  causing the holdup.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1547279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548443] [NEW] Update ComputeNode values with disk allocation ratios in the RT

2016-02-22 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/277953
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit ad6654eaa7c44267ae3a4952a8359459fbec4c0c
Author: Sylvain Bauza 
Date:   Tue Feb 9 17:35:47 2016 +0100

Update ComputeNode values with disk allocation ratios in the RT

Now that we have added the field for persisting the disk allocation ratio,
we can have the ResourceTracker persist it by adding it to the local
ComputeNode object, which is persisted by calling the _update() method.
It will then send 0.0 by default unless the operator explicitly specified
an allocation ratio in the compute nova.conf.

Thanks to the ComputeNode object hydration on the scheduler side, the facade
will make sure that if a default 0.0 is provided by either a compute node or
by the scheduler's nova.conf, it will actually get the original allocation
ratios (i.e. 1.0 for disk).
Since the Scheduler reads the same RT opt but goes through the ComputeNode
object, it will also get the facade returning 1.0 unless the operator
explicitly provided other ratios in the scheduler's nova.conf.

Amending the release note now that the behaviour is changing.

DocImpact Disk alloc ratio is now per computenode
UpgradeImpact

Change-Id: Ief6fa32429d58b80e70029ed67c7f42e0bdc986d
Implements: blueprint disk-allocation-ratio-to-rt

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1548443

Title:
  Update ComputeNode values with disk allocation ratios in the RT

Status in OpenStack Compute (nova):
  New

Bug description:
  https://review.openstack.org/277953
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit ad6654eaa7c44267ae3a4952a8359459fbec4c0c
  Author: Sylvain Bauza 
  Date:   Tue Feb 9 17:35:47 2016 +0100

  Update ComputeNode values with disk allocation ratios in the RT
  
  Now that we have added the field for persisting the disk allocation ratio,
  we can have the ResourceTracker persist it by adding it to the local
  ComputeNode object, which is persisted by calling the _update() method.
  It will then send 0.0 by default unless the operator explicitly specified
  an allocation ratio in the compute nova.conf.
  
  Thanks to the ComputeNode object hydration on the scheduler side, the
  facade will make sure that if a default 0.0 is provided by either a compute
  node or by the scheduler's nova.conf, it will actually get the original
  allocation ratios (i.e. 1.0 for disk).
  Since the Scheduler reads the same RT opt but goes through the ComputeNode
  object, it will also get the facade returning 1.0 unless the operator
  explicitly provided other ratios in the scheduler's nova.conf.
  
  Amending the release note now that the behaviour is changing.
  
  DocImpact Disk alloc ratio is now per computenode
  UpgradeImpact
  
  Change-Id: Ief6fa32429d58b80e70029ed67c7f42e0bdc986d
  Implements: blueprint disk-allocation-ratio-to-rt
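
  A sketch of the 0.0-means-default behaviour described above (hypothetical
  helper name, not nova's actual facade code):

    def effective_disk_allocation_ratio(persisted_ratio, default=1.0):
        # A persisted 0.0 means "the operator did not set a ratio", so
        # fall back to the historical default of 1.0.
        return persisted_ratio if persisted_ratio else default

    assert effective_disk_allocation_ratio(0.0) == 1.0  # unset -> default
    assert effective_disk_allocation_ratio(2.5) == 2.5  # operator-provided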

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1548443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548433] [NEW] neutron returns objects other than oslo_config.cfg.Opt instances from list_opts

2016-02-22 Thread Doug Hellmann
Public bug reported:

The neutron function for listing options for use with the configuration
generator returns things that are not compliant with the
oslo_config.cfg.Opt class API. At the very least this includes the
options from keystoneauth1, but I haven't looked to find if there are
others.

We'll work around this for now in the configuration generator code, but
in the future we will more strictly enforce the API compliance by
refusing to generate a configuration file or by leaving options out of
the output.

The change blocked by this issue is:
https://review.openstack.org/#/c/282435/5

One failure log showing the issue is:
http://logs.openstack.org/35/282435/5/check/gate-tempest-dsvm-neutron-
src-oslo.config/77044c6/logs/devstacklog.txt.gz

The neutron code triggering the issue is in:
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/opts.py#n279

The best solution would be to fix keystoneauth to support option
discovery natively using proper oslo.config Opts.

** Affects: keystoneauth
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

** Also affects: keystoneauth
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548433

Title:
  neutron returns objects other than oslo_config.cfg.Opt instances from
  list_opts

Status in keystoneauth:
  New
Status in neutron:
  New

Bug description:
  The neutron function for listing options for use with the
  configuration generator returns things that are not compliant with the
  oslo_config.cfg.Opt class API. At the very least this includes the
  options from keystoneauth1, but I haven't looked to find if there are
  others.

  We'll work around this for now in the configuration generator code,
  but in the future we will more strictly enforce the API compliance by
  refusing to generate a configuration file or by leaving options out of
  the output.

  The change blocked by this issue is:
  https://review.openstack.org/#/c/282435/5

  One failure log showing the issue is:
  http://logs.openstack.org/35/282435/5/check/gate-tempest-dsvm-neutron-
  src-oslo.config/77044c6/logs/devstacklog.txt.gz

  The neutron code triggering the issue is in:
  http://git.openstack.org/cgit/openstack/neutron/tree/neutron/opts.py#n279

  The best solution would be to fix keystoneauth to support option
  discovery natively using proper oslo.config Opts.
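
  For reference, the configuration generator expects list_opts to return
  (group, [Opt, ...]) pairs built from real oslo_config Opt instances; a
  minimal compliant sketch (the option name is hypothetical):

    from oslo_config import cfg

    _OPTS = [
        cfg.StrOpt('example_driver',  # hypothetical option
                   help='Driver used for the example feature.'),
    ]

    def list_opts():
        # Every entry must be an oslo_config.cfg.Opt instance.
        return [('DEFAULT', _OPTS)]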

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystoneauth/+bug/1548433/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548386] [NEW] LBaaS v2: Unable to edit load balancer that has a listener

2016-02-22 Thread Justin Pomeroy
Public bug reported:

I have a devstack setup with LBaaS v2 using haproxy.  If I create a load
balancer and then edit it (change name or description) this works fine.
But if the load balancer has a listener and then I try to edit it, the
update works but the load balancer then goes into Error state.

Hopefully relevant error from the q-lbaasv2 screen log:
http://paste.openstack.org/show/487756/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548386

Title:
  LBaaS v2: Unable to edit load balancer that has a listener

Status in neutron:
  New

Bug description:
  I have a devstack setup with LBaaS v2 using haproxy.  If I create a
  load balancer and then edit it (change name or description) this works
  fine.  But if the load balancer has a listener and then I try to edit
  it, the update works but the load balancer then goes into Error state.

  Hopefully relevant error from the q-lbaasv2 screen log:
  http://paste.openstack.org/show/487756/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323975] Re: do not use default=None for config options

2016-02-22 Thread Dmitry Tantsur
** No longer affects: ironic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323975

Title:
  do not use default=None for config options

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Fix Released
Status in oslo.messaging:
  Fix Released
Status in Rally:
  Fix Committed
Status in Sahara:
  Fix Released
Status in tempest:
  In Progress
Status in Trove:
  In Progress
Status in zaqar:
  Fix Released

Bug description:
  In the cfg module default=None is set as the default value. It's not
  necessary to set it again when defining config options.
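
  A minimal illustration (hypothetical option name):

    from oslo_config import cfg

    # Redundant: the default is already None when not specified.
    opt = cfg.StrOpt('example_driver', default=None)

    # Equivalent and preferred:
    opt = cfg.StrOpt('example_driver')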

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1323975/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512207] Re: Fix usage of assertions

2016-02-22 Thread Dmitry Tantsur
** No longer affects: ironic

** No longer affects: ironic-inspector

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1512207

Title:
  Fix usage of assertions

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Blazar:
  In Progress
Status in Cinder:
  Invalid
Status in congress:
  Fix Released
Status in Cue:
  Fix Released
Status in Glance:
  Won't Fix
Status in Group Based Policy:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in kuryr:
  Fix Released
Status in Magnum:
  In Progress
Status in Manila:
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Health:
  In Progress
Status in os-brick:
  Fix Released
Status in os-net-config:
  In Progress
Status in os-testr:
  In Progress
Status in oslo.cache:
  Fix Released
Status in oslo.messaging:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-barbicanclient:
  In Progress
Status in python-ceilometerclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in OpenStack SDK:
  Fix Released
Status in Rally:
  Fix Released
Status in requests-mock:
  In Progress
Status in Sahara:
  Fix Released
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in Stackalytics:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in tempest:
  In Progress
Status in Trove:
  Fix Released
Status in Vitrage:
  Fix Released
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  Manila should use the specific assertions:

    self.assertTrue(observed) / self.assertFalse(observed)

  instead of the generic assertion:

self.assertEqual(True/False, observed)
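
  A small runnable illustration of the difference (a sketch, not Manila's
  actual tests):

    import unittest

    class ExampleTest(unittest.TestCase):
        def test_flag_enabled(self):
            observed = True
            # Generic form: a failure reads "True != False" with no hint
            # of intent.
            self.assertEqual(True, observed)
            # Specific form: clearer intent and failure message.
            self.assertTrue(observed)

    if __name__ == '__main__':
        unittest.main()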

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1512207/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1116433] Re: Wishlist: Stick tenant to availability zone

2016-02-22 Thread Sean Dague
This is definitely a feature. If someone wants to propose it via a spec
that's fine, but tracking as a wishlist bug won't get us anywhere.

** Changed in: nova
 Assignee: Thang Pham (thang-pham) => (unassigned)

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1116433

Title:
  Wishlist: Stick tenant to availability zone

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  You may imagine a situation, where you have a cloud with heterogeneous
  hardware configurations, some hosts might have Tesla adapter, some
  would have large and fast RAID array backend. You may separate them by
  availability zone, but you can't force a user to stick with a zone
  within his tenant.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1116433/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456955] Re: tox -epep8 fails due to tox picking python 3.x

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/282590
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=05583d14966a366f0d37753b81c8f72c87126348
Submitter: Jenkins
Branch:master

commit 05583d14966a366f0d37753b81c8f72c87126348
Author: Sean Dague 
Date:   Fri Feb 19 21:10:42 2016 -0500

always use python2.7 for pep8

pep8 doesn't work with python3 on our codebase. If someone is on a
platform that defaults to python => python3, pep8 won't work for
them. This is actually really easy to fix with a single line in tox.

Change-Id: I7a888e4f7cc828638a9d61d2249a854ba1f3cb7b
Closes-Bug: #1456955
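
The single line referenced above amounts to pinning the interpreter for the
pep8 environment in tox.ini; a sketch (section layout per standard tox
conventions, not necessarily nova's exact file):

    [testenv:pep8]
    basepython = python2.7
    commands = flake8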


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456955

Title:
  tox -epep8 fails due to tox picking python 3.x

Status in Designate:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  New

Bug description:
  karlsone@workstation:~/projects/dnsaas/designate$ tox -epep8
  pep8 create: /home/karlsone/projects/dnsaas/designate/.tox/pep8
  pep8 installdeps: 
-r/home/karlsone/projects/dnsaas/designate/requirements.txt, 
-r/home/karlsone/projects/dnsaas/designate/test-requirements.txt
  pep8 develop-inst: /home/karlsone/projects/dnsaas/designate
  pep8 runtests: PYTHONHASHSEED='0'
  pep8 runtests: commands[0] | flake8
  Traceback (most recent call last):
File ".tox/pep8/bin/flake8", line 9, in 
  load_entry_point('flake8==2.1.0', 'console_scripts', 'flake8')()
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/flake8/main.py",
 line 24, in main
  flake8_style = get_style_guide(parse_argv=True, 
config_file=DEFAULT_CONFIG)
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/flake8/engine.py",
 line 79, in get_style_guide
  kwargs['parser'], options_hooks = get_parser()
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/flake8/engine.py",
 line 53, in get_parser
  parser_hook(parser)
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/hacking/core.py",
 line 146, in add_options
  factory = pbr.util.resolve_name(local_check_fact)
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/pbr/util.py",
 line 171, in resolve_name
  ret = __import__('.'.join(module_name), fromlist=[attr_name])
File "/home/karlsone/projects/dnsaas/designate/designate/__init__.py", line 
16, in 
  import eventlet
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/__init__.py",
 line 10, in 
  from eventlet import convenience
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/convenience.py",
 line 6, in 
  from eventlet.green import socket
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/green/socket.py",
 line 17, in 
  from eventlet.support import greendns
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/support/greendns.py",
 line 54, in 
  socket=_socket_nodns)
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/patcher.py",
 line 119, in import_patched
  *additional_modules + tuple(kw_additional_modules.items()))
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/patcher.py",
 line 93, in inject
  module = __import__(module_name, {}, {}, module_name.split('.')[:-1])
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/dns/resolver.py",
 line 32, in 
  import dns.flags
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/dns/flags.py",
 line 51, in 
  _by_value = dict([(y, x) for x, y in _by_text.iteritems()])
  AttributeError: 'dict' object has no attribute 'iteritems'
  ERROR: InvocationError: 
'/home/karlsone/projects/dnsaas/designate/.tox/pep8/bin/flake8'
  

 summary 
_
  ERROR:   pep8: commands failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1456955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548369] [NEW] horizon heat template upload fails to parse

2016-02-22 Thread Eric Peterson
Public bug reported:

This bug is not 100% reliable to reproduce, and occurs only when running
under wsgi / apache in my testing.  Developers will not typically see
this bug.

When wsgi is tuned to have multiple processes, the heat template upload
can have parse failures.  These errors do not show up in the logs, but
the user gets a vague encoding type error message.

When I tune wsgi to have a single process, this problem goes away.

This could also be some double-post or JavaScript-type bug, I'm not sure.
Like I said, this bug does not occur 100% of the time. Filing this
information as it might be helpful to others.

** Affects: horizon
 Importance: Undecided
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1548369

Title:
  horizon heat template upload fails to parse

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  This bug is not 100% reliable to reproduce, and occurs only when
  running under wsgi / apache in my testing.  Developers will not
  typically see this bug.

  When wsgi is tuned to have multiple processes, the heat template
  upload can have parse failures.  These errors do not show up in the
  logs, but the user gets a vague encoding type error message.

  When I tune wsgi to have a single process, this problem goes away.

  This could also be some double-post or JavaScript-type bug, I'm not sure.
  Like I said, this bug does not occur 100% of the time. Filing this
  information as it might be helpful to others.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1548369/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456955] Re: tox -epep8 fails due to tox picking python 3.x

2016-02-22 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456955

Title:
  tox -epep8 fails due to tox picking python 3.x

Status in Designate:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) liberty series:
  New

Bug description:
  karlsone@workstation:~/projects/dnsaas/designate$ tox -epep8
  pep8 create: /home/karlsone/projects/dnsaas/designate/.tox/pep8
  pep8 installdeps: 
-r/home/karlsone/projects/dnsaas/designate/requirements.txt, 
-r/home/karlsone/projects/dnsaas/designate/test-requirements.txt
  pep8 develop-inst: /home/karlsone/projects/dnsaas/designate
  pep8 runtests: PYTHONHASHSEED='0'
  pep8 runtests: commands[0] | flake8
  Traceback (most recent call last):
File ".tox/pep8/bin/flake8", line 9, in 
  load_entry_point('flake8==2.1.0', 'console_scripts', 'flake8')()
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/flake8/main.py",
 line 24, in main
  flake8_style = get_style_guide(parse_argv=True, 
config_file=DEFAULT_CONFIG)
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/flake8/engine.py",
 line 79, in get_style_guide
  kwargs['parser'], options_hooks = get_parser()
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/flake8/engine.py",
 line 53, in get_parser
  parser_hook(parser)
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/hacking/core.py",
 line 146, in add_options
  factory = pbr.util.resolve_name(local_check_fact)
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/pbr/util.py",
 line 171, in resolve_name
  ret = __import__('.'.join(module_name), fromlist=[attr_name])
File "/home/karlsone/projects/dnsaas/designate/designate/__init__.py", line 
16, in 
  import eventlet
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/__init__.py",
 line 10, in 
  from eventlet import convenience
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/convenience.py",
 line 6, in 
  from eventlet.green import socket
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/green/socket.py",
 line 17, in 
  from eventlet.support import greendns
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/support/greendns.py",
 line 54, in 
  socket=_socket_nodns)
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/patcher.py",
 line 119, in import_patched
  *additional_modules + tuple(kw_additional_modules.items()))
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/eventlet/patcher.py",
 line 93, in inject
  module = __import__(module_name, {}, {}, module_name.split('.')[:-1])
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/dns/resolver.py",
 line 32, in 
  import dns.flags
File 
"/home/karlsone/projects/dnsaas/designate/.tox/pep8/lib/python3.4/site-packages/dns/flags.py",
 line 51, in 
  _by_value = dict([(y, x) for x, y in _by_text.iteritems()])
  AttributeError: 'dict' object has no attribute 'iteritems'
  ERROR: InvocationError: 
'/home/karlsone/projects/dnsaas/designate/.tox/pep8/bin/flake8'
  

 summary 
_
  ERROR:   pep8: commands failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1456955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414517] Re: Zookeeper servicegroup API driver is not tested and apparently not usable either

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/246343
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=337a1b029a1f144f577a78712413a4182dd525f8
Submitter: Jenkins
Branch:master

commit 337a1b029a1f144f577a78712413a4182dd525f8
Author: Mark McLoughlin 
Date:   Tue Nov 17 10:17:44 2015 +

servicegroup: remove the zookeeper driver

We have had an "untested and risky to use in production" log warning
message for this driver since Kilo, it is currently broken (see below),
there are no obviously active users or contributors, and we are planning
on enabling Zookeeper usage by adopting the tooz library.

bug #1443910 shows that the driver fails to load because eventlet 0.17
broke evzookeeper by moving _SocketDuckForFd from eventlet.greenio to
eventlet.greenio.py2 in commit 449c90a. The 0.17 release was in Feb,
2015. The evzookeeper library hasn't been updated since Sep 2012 and the
sole maintainer is the original author of the zookeeper servicegroup
driver.

The tooz driver spec - Ibf70c2dbe308fc8e4dd277d8c75c4445b3dfce90 -
proposes a formal deprecation period for the zk driver, during which
existing zk driver users are encouraged to move to tooz. However, given
the state of the zk driver, we must conclude that there are no existing
users who need a graceful migration path. Removing the driver will
avoid potential confusion for new users and simplify the path to
adopting tooz.

Closes-Bug: #1443910
Closes-Bug: #1414517
Closes-Bug: #1414536

Signed-off-by: Mark McLoughlin 
Change-Id: I2dba71e71b1ed7cf8476e8bfe9481e84be5df128


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414517

Title:
  Zookeeper servicegroup API driver is not tested and apparently not
  usable either

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The current UNIT tests for the Zookeeper servicegroup API driver are
  never run in any of the gates, since it would require installation of
  a number of Python libraries, one of which would require C headers
  (zkpython).

  When I tried to follow the instructions on the Zookeeper driver unit
  test:

  You need to install ZooKeeper locally and related dependencies
  to run the test. It's unclear how to install python-zookeeper lib
  in venv so you might have to run the test without it.

  To set up in Ubuntu 12.04:
  $ sudo apt-get install zookeeper zookeeperd python-zookeeper
  $ sudo pip install evzookeeper
  $ nosetests nova.tests.unit.servicegroup.test_zk_driver

  The steps above did not work. The evzookeeper PIP install never
  completes, due to the following error:

  jaypipes@minty:~/repos/openstack/nova$ sudo pip install evzookeeper
  Traceback (most recent call last):
File "/usr/bin/pip", line 9, in 
  load_entry_point('pip==1.5.4', 'console_scripts', 'pip')()
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 351, in 
load_entry_point
  return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2363, in 
load_entry_point
  return ep.load()
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2088, in load
  entry = __import__(self.module_name, globals(),globals(), ['__name__'])
File "/usr/lib/python2.7/dist-packages/pip/__init__.py", line 11, in 

  from pip.vcs import git, mercurial, subversion, bazaar  # noqa
File "/usr/lib/python2.7/dist-packages/pip/vcs/mercurial.py", line 9, in 

  from pip.download import path_to_url
File "/usr/lib/python2.7/dist-packages/pip/download.py", line 25, in 

  from requests.compat import IncompleteRead
  ImportError: cannot import name IncompleteRead

  After doing some digging, it looks like the Python + Zookeeper
  community has shifted its focus to the Kazoo library:

  https://kazoo.readthedocs.org/en/latest/

  And our own community has switched focuses to the Tooz distributed
  lock management library. So, I propose that we mark the existing ZK
  driver in Nova as deprecated, with a note that we're not sure it ever
  worked to begin with.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1414517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414536] Re: Zookeeper servicegroup driver's get_all() erroneously raises ServiceGroupUnavailable

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/246343
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=337a1b029a1f144f577a78712413a4182dd525f8
Submitter: Jenkins
Branch:master

commit 337a1b029a1f144f577a78712413a4182dd525f8
Author: Mark McLoughlin 
Date:   Tue Nov 17 10:17:44 2015 +

servicegroup: remove the zookeeper driver

We have had an "untested and risky to use in production" log warning
message for this driver since Kilo, it is currently broken (see below),
there are no obviously active users or contributors, and we are planning
on enabling Zookeeper usage by adopting the tooz library.

bug #1443910 shows that the driver fails to load because eventlet 0.17
broke evzookeeper by moving _SocketDuckForFd from eventlet.greenio to
eventlet.greenio.py2 in commit 449c90a. The 0.17 release was in Feb,
2015. The evzookeeper library hasn't been updated since Sep 2012 and the
sole maintainer is the original author of the zookeeper servicegroup
driver.

The tooz driver spec - Ibf70c2dbe308fc8e4dd277d8c75c4445b3dfce90 -
proposes a formal deprecation period for the zk driver, during which
existing zk driver users are encouraged to move to tooz. However, given
the state of the zk driver, we must conclude that there are no existing
users who need a graceful migration path. Removing the driver will
avoid potential confusion for new users and simplify the path to
adopting tooz.

Closes-Bug: #1443910
Closes-Bug: #1414517
Closes-Bug: #1414536

Signed-off-by: Mark McLoughlin 
Change-Id: I2dba71e71b1ed7cf8476e8bfe9481e84be5df128


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1414536

Title:
  Zookeeper servicegroup driver's get_all() erroneously raises
  ServiceGroupUnavailable

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The Zookeeper servicegroup API driver raises
  nova.exception.ServiceGroupUnavailable when there are no members in UP
  state for a group. However, the other two drivers for memcache and DB,
  return an empty list. Since the Zookeeper driver actually calls its
  own get_all() method in its is_up() method, there's actually no way
  the Zookeeper driver was working correctly, since if
  ServiceGroupUnavailable was raised from is_up(), things would go
  haywire in many places.
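
  A sketch of the consistent behaviour the other two drivers implement
  (hypothetical helper names, not the removed zk code): return an empty
  list instead of raising when no members are up.

    def get_all(self, group_id):
        members = self._fetch_members(group_id)  # hypothetical helper
        # May legitimately be an empty list; callers treat that as
        # "no members up", not as a service-group outage.
        return [m for m in members if self._member_is_up(m)]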

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1414536/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443910] Re: Zookeeper servicegroup driver crashes

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/246343
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=337a1b029a1f144f577a78712413a4182dd525f8
Submitter: Jenkins
Branch:master

commit 337a1b029a1f144f577a78712413a4182dd525f8
Author: Mark McLoughlin 
Date:   Tue Nov 17 10:17:44 2015 +

servicegroup: remove the zookeeper driver

We have had an "untested and risky to use in production" log warning
message for this driver since Kilo, it is currently broken (see below),
there are no obviously active users or contributors, and we are planning
on enabling Zookeeper usage by adopting the tooz library.

bug #1443910 shows that the driver fails to load because eventlet 0.17
broke evzookeeper by moving _SocketDuckForFd from eventlet.greenio to
eventlet.greenio.py2 in commit 449c90a. The 0.17 release was in Feb,
2015. The evzookeeper library hasn't been updated since Sep 2012 and the
sole maintainer is the original author of the zookeeper servicegroup
driver.

The tooz driver spec - Ibf70c2dbe308fc8e4dd277d8c75c4445b3dfce90 -
proposes a formal deprecation period for the zk driver, during which
existing zk driver users are encouraged to move to tooz. However, given
the state of the zk driver, we must conclude that there are no existing
users who need a graceful migration path. Removing the driver will
avoid potential confusion for new users and simplify the path to
adopting tooz.

Closes-Bug: #1443910
Closes-Bug: #1414517
Closes-Bug: #1414536

Signed-off-by: Mark McLoughlin 
Change-Id: I2dba71e71b1ed7cf8476e8bfe9481e84be5df128


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1443910

Title:
  Zookeeper servicegroup driver crashes

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The Zookeeper driver is based on the zookeeper and evzookeeper modules. The
  latter is the source of a nasty crash, which is clearly visible on the nova
  conductor. To reproduce, it is enough to enable zookeeper in nova.conf,
  provide configuration for the zookeeper service address, and stack the
  thing. The traceback:

  2015-04-14 13:23:22.622 TRACE nova Traceback (most recent call last):
  2015-04-14 13:23:22.622 TRACE nova   File "/usr/local/bin/nova-conductor", 
line 10, in 
  2015-04-14 13:23:22.622 TRACE nova sys.exit(main())
  2015-04-14 13:23:22.622 TRACE nova   File 
"/opt/stack/nova/nova/cmd/conductor.py", line 44, in main
  2015-04-14 13:23:22.622 TRACE nova manager=CONF.conductor.manager)
  2015-04-14 13:23:22.622 TRACE nova   File "/opt/stack/nova/nova/service.py", 
line 277, in create
  2015-04-14 13:23:22.622 TRACE nova db_allowed=db_allowed)
  2015-04-14 13:23:22.622 TRACE nova   File "/opt/stack/nova/nova/service.py", 
line 146, in __init__
  2015-04-14 13:23:22.622 TRACE nova self.servicegroup_api = 
servicegroup.API(db_allowed=db_allowed)
  2015-04-14 13:23:22.622 TRACE nova   File 
"/opt/stack/nova/nova/servicegroup/api.py", line 76, in __init__
  2015-04-14 13:23:22.622 TRACE nova *args, **kwargs)
  2015-04-14 13:23:22.622 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 38, in 
import_object
  2015-04-14 13:23:22.622 TRACE nova return import_class(import_str)(*args, 
**kwargs)
  2015-04-14 13:23:22.622 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 27, in 
import_class
  2015-04-14 13:23:22.622 TRACE nova __import__(mod_str)
  2015-04-14 13:23:22.622 TRACE nova   File 
"/opt/stack/nova/nova/servicegroup/drivers/zk.py", line 28, in 
  2015-04-14 13:23:22.622 TRACE nova evzookeeper = 
importutils.try_import('evzookeeper')
  2015-04-14 13:23:22.622 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 71, in 
try_import
  2015-04-14 13:23:22.622 TRACE nova return import_module(import_str)
  2015-04-14 13:23:22.622 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 57, in 
import_module
  2015-04-14 13:23:22.622 TRACE nova __import__(import_str)
  2015-04-14 13:23:22.622 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/evzookeeper/__init__.py", line 26, in 

  2015-04-14 13:23:22.622 TRACE nova from evzookeeper import utils
  2015-04-14 13:23:22.622 TRACE nova   File 
"/usr/local/lib/python2.7/dist-packages/evzookeeper/utils.py", line 26, in 

  2015-04-14 13:23:22.622 TRACE nova class 
_SocketDuckForFdTimeout(greenio._SocketDuckForFd):
  2015-04-14 13:23:22.622 TRACE nova AttributeError: 'module' object has no 
attribute '_SocketDuckForFd'

  The root cause of the problem is a change in eventlet 0.17, which moved
  _SocketDuckForFd from eventlet.greenio to eventlet.greenio.py2 (commit
  449c90a), breaking evzookeeper.

[Yahoo-eng-team] [Bug 1547544] Re: heat: MessagingTimeout: Timed out waiting for a reply to message ID

2016-02-22 Thread Sean Dague
From looking at the dstat output, the node in question is above a load
average of 11 for nearly 2 hours; about an hour into that is where your
error happens.

Realistically, that's just too much work being asked of the node. We
have found in the gate that once you get sustained load average over 10
things start to break down. There is no bug fix for this, it's just a
fallout of our architecture.

Marking as won't fix, as I don't think there is anything actionable
here. If you have performance improvements in your environment that make
this better, that's great. However there are bounds in which the nova
compute worker just does fail over, and there is not much to be done
about it.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1547544

Title:
  heat: MessagingTimeout: Timed out waiting for a reply to message ID

Status in OpenStack Compute (nova):
  Won't Fix
Status in oslo.messaging:
  New

Bug description:
  Setup:

  Single controller[48 GB RAM, 16vCPU, 120GB Disk]
  3 Network Nodes
  100 ESX hypervisors distributed in 10 nova-compute nodes

  Test:

  1. Create a /16 network
  2. Heat template which will launch 100 instances on the network created in
  step 1
  3. Create 10 stacks back-to-back so that we reach 1000 instances without
  waiting for the previous stack to complete

  Observation:

  stack creations are failing while nova runs periodic tasks, in different
  places like _heal_instance_info_cache, _sync_scheduler_instance_info,
  _update_available_resource, etc.

  Have attached sample heat template, heat logs, nova compute log from
  one of the host.

  
  Logs:

  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  2016-02-19 04:21:54.691 TRACE nova.compute.manager return f(*args, 
**kwargs)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 553, in 
_update_available_resource
  2016-02-19 04:21:54.691 TRACE nova.compute.manager context, self.host, 
self.nodename)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
174, in wrapper
  2016-02-19 04:21:54.691 TRACE nova.compute.manager args, kwargs)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/conductor/rpcapi.py", line 240, in 
object_class_action_versions
  2016-02-19 04:21:54.691 TRACE nova.compute.manager args=args, 
kwargs=kwargs)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
158, in call
  2016-02-19 04:21:54.691 TRACE nova.compute.manager retry=self.retry)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send
  2016-02-19 04:21:54.691 TRACE nova.compute.manager timeout=timeout, 
retry=retry)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 465, in send
  2016-02-19 04:21:54.691 TRACE nova.compute.manager retry=retry)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 454, in _send
  2016-02-19 04:21:54.691 TRACE nova.compute.manager result = 
self._waiter.wait(msg_id, timeout)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 337, in wait
  2016-02-19 04:21:54.691 TRACE nova.compute.manager message = 
self.waiters.get(msg_id, timeout=timeout)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 239, in get
  2016-02-19 04:21:54.691 TRACE nova.compute.manager 'to message ID %s' % 
msg_id)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager MessagingTimeout: Timed 
out waiting for a reply to message ID a87a7f358a0948efa3ab5beb0c8f45e7
  --

  
  stack@esx-compute-9:/opt/stack/nova$ git log -1
  commit d51c5670d8d26e989d92eb29658eed8113034c0f
  Merge: 4fade90 30d5d80
  Author: Jenkins 
  Date:   Thu Feb 18 17:56:32 2016 +

  Merge "reset task_state after select_destinations failed."
  stack@esx-compute-9:/opt/stack/nova$

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1547544/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460177] Re: Support metadata service with IPv6-only tenant network

2016-02-22 Thread James Page
** No longer affects: neutron (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460177

Title:
  Support metadata service with IPv6-only tenant network

Status in neutron:
  Incomplete

Bug description:
  The EC2 metadata service is backed by the nova metadata service running
  in the management network. Cloud-init running in the instance normally
  accesses the service at 169.254.169.254. Cloud-init can be configured with
  metadata_urls other than the default http://169.254.169.254, but such
  configuration is not currently supported by OpenStack. In order for the
  instance to access the nova metadata service, neutron provides a proxy
  service that terminates http://169.254.169.254, forwards the request to
  the nova metadata service, and responds back to the instance. Apparently,
  this works only when IPv4 is available in the tenant network. For an
  IPv6-only tenant network, to keep supporting this service, the instance
  has to access it at an IPv6 address. This requires an enhancement in
  Neutron.

  A few options have been discussed so far:
     -- define a well-known IPv6 link-local address to access the metadata
  service.
     -- enhance IPv6 RA to advertise the metadata service endpoint to
  instances. This would require standards work and enhancing cloud-init to
  support it.
     -- define a well-known name for the metadata service and configure
  metadata_urls to use the name (see the cloud-init sketch below). The name
  will be resolved to a datacenter-specific IP address. The corresponding
  DNS record should be pre-provisioned in the datacenter DNS server for the
  instance to resolve the name.
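
  For the third option, the instance-side change is pure cloud-init
  configuration; a sketch (the hostname is hypothetical, and the setting
  assumes the Ec2 datasource's metadata_urls option):

    # /etc/cloud/cloud.cfg.d/99-metadata.cfg
    datasource:
      Ec2:
        metadata_urls:
          - http://metadata.example-dc.internal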

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460177/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548289] [NEW] Scrubber exits with 0 even though it didn't work

2016-02-22 Thread Vincent Untz
Public bug reported:

glance-scrubber fails to work with:

08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
[req-3200d95d-a3d7-4347-a4a3-9e00dc2fbc53 glance service - - -] Registry client 
request GET /images/detail raised AuthUrlNotFound
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
Traceback (most recent call last):
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client   
File "/usr/lib/python2.7/site-packages/glance/registry/client/v1/client.py", 
line 121, in do_request
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client   
  **kwargs)
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client   
File "/usr/lib/python2.7/site-packages/glance/common/client.py", line 71, in 
wrapped
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client   
  return func(self, *args, **kwargs)
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client   
File "/usr/lib/python2.7/site-packages/glance/common/client.py", line 367, in 
do_request
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client   
  self._authenticate()
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client   
File "/usr/lib/python2.7/site-packages/glance/common/client.py", line 345, in 
_authenticate
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client   
  auth_plugin.authenticate()
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client   
File "/usr/lib/python2.7/site-packages/glance/common/auth.py", line 131, in 
authenticate
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client   
  _authenticate(auth_url)
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client   
File "/usr/lib/python2.7/site-packages/glance/common/auth.py", line 125, in 
_authenticate
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client   
  self._v1_auth(token_url)
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client   
File "/usr/lib/python2.7/site-packages/glance/common/auth.py", line 188, in 
_v1_auth
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client   
  raise exception.AuthUrlNotFound(url=token_url)
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
AuthUrlNotFound: Auth service at URL http://10.164.0.2:5000/v3/tokens not found.
08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
08:52:43 2016-02-21 07:52:43.011 16387 ERROR glance.scrubber 
[req-3200d95d-a3d7-4347-a4a3-9e00dc2fbc53 glance service - - -] Can not get 
scrub jobs from queue: Auth service at URL http://10.164.0.2:5000/v3/tokens not 
found.

(this was due to using keystone v3 API -- but ignore the error itself,
just consider that it failed for some reason)

However, when I run it, the exit code of the process is still 0, while
it should be 1.
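
A sketch of the expected behaviour (hypothetical wrapper, not
glance-scrubber's real entry point): propagate failure through the exit
code so cron jobs and callers can detect it.

    import sys

    def main():
        try:
            run_scrubber()  # hypothetical: the actual scrub pass
        except Exception as exc:
            sys.stderr.write('scrub failed: %s\n' % exc)
            return 1  # non-zero exit signals the failure
        return 0

    if __name__ == '__main__':
        sys.exit(main())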

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1548289

Title:
  Scrubber exits with 0 even though it didn't work

Status in Glance:
  New

Bug description:
  glance-scrubber fails to work with:

  08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
[req-3200d95d-a3d7-4347-a4a3-9e00dc2fbc53 glance service - - -] Registry client 
request GET /images/detail raised AuthUrlNotFound
  08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
Traceback (most recent call last):
  08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
  File "/usr/lib/python2.7/site-packages/glance/registry/client/v1/client.py", 
line 121, in do_request
  08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
**kwargs)
  08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
  File "/usr/lib/python2.7/site-packages/glance/common/client.py", line 71, in 
wrapped
  08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
return func(self, *args, **kwargs)
  08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
  File "/usr/lib/python2.7/site-packages/glance/common/client.py", line 367, in 
do_request
  08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
self._authenticate()
  08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
  File "/usr/lib/python2.7/site-packages/glance/common/client.py", line 345, in 
_authenticate
  08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
auth_plugin.authenticate()
  08:52:43 2016-02-21 07:52:43.009 16387 ERROR glance.registry.client.v1.client 
  File 

[Yahoo-eng-team] [Bug 1548285] [NEW] l3 HA network management is racey

2016-02-22 Thread Kevin Benton
Public bug reported:

The logic surrounding the creation of the L3 HA network doesn't handle
races where the network could be deleted after its existence is checked
for. It also doesn't handle the case where the network doesn't exist but
another creation happens before it gets to create the network.

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New


** Tags: l3-ha

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Tags added: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548285

Title:
  l3 HA network management is racey

Status in neutron:
  New

Bug description:
  The logic surrounding the creation of the L3 HA network doesn't handle
  races where the network could be deleted after its existence is
  checked for. It also doesn't handle the case where the network doesn't
  exist but another creation happens before it gets to create the
  network.
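
  A common pattern for closing this kind of race (a sketch with hypothetical
  helper and exception names, not the actual neutron fix): treat "already
  exists" and "deleted underneath us" as retryable.

    def get_or_create_ha_network(tenant_id):
        while True:
            net = find_ha_network(tenant_id)  # hypothetical lookup
            if net is not None:
                return net
            try:
                return create_ha_network(tenant_id)  # hypothetical create
            except DuplicateNetworkError:
                # Another worker created it first; loop and re-fetch.
                continue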

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548285/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548278] [NEW] rabbitmq q-agent-notifier-port-update_fanout_91e5c8311b1b47a2b39ede94dad9a56b is blocked

2016-02-22 Thread youyunyehe
Public bug reported:

Queue q-agent-notifier-port-
update_fanout_91e5c8311b1b47a2b39ede94dad9a56b is blocked (please refer
to the attached picture).

Version: RabbitMQ 3.6.0 release, OpenStack Kilo

This phenomenon sometimes comes up in large-scale environments: when a
RabbitMQ message queue is created with no consumer bound to it, but
producers keep publishing messages to it, the queue is never dropped.

How can I get a queue that has no consumers or producers bound to it
dropped?
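
One common operational mitigation (a RabbitMQ-side sketch, not an
oslo.messaging fix) is a queue-TTL policy so that idle queues expire on
their own:

    # Expire queues that have been unused for 30 minutes.
    rabbitmqctl set_policy expiry ".*" '{"expires": 1800000}' --apply-to queues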

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron rabbitmq

** Attachment added: 
"q-agent-notifier-port-update_fanout_91e5c8311b1b47a2b39ede94dad9a56b"
   
https://bugs.launchpad.net/bugs/1548278/+attachment/4577899/+files/rabbitmq_blocked_fanoutqueue.bmp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548278

Title:
  rabbitmq q-agent-notifier-port-
  update_fanout_91e5c8311b1b47a2b39ede94dad9a56b is blocked

Status in neutron:
  New

Bug description:
  Queue q-agent-notifier-port-
  update_fanout_91e5c8311b1b47a2b39ede94dad9a56b is blocked (please
  refer to the attached picture)

  Version: RabbitMQ 3.6.0 release, openstack kilo

  This phenomenon sometimes comes up in large-scale environments: when a
  RabbitMQ message queue is created with no consumer bound to it but the
  producers keep publishing messages to it continuously, the queue will
  not be dropped!
  If I want a queue that has no consumers or producers bound to it to be
  dropped,

  how can I do that?
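
  For the closing question: RabbitMQ can drop an idle queue on its own if
  the queue is declared with the 'x-expires' argument (a per-queue idle
  TTL that publishes do not reset), or via an equivalent broker policy.
  A minimal sketch with the pika client, assuming a broker on localhost
  and an illustrative queue name:

      import pika

      conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
      channel = conn.channel()
      # The broker deletes the queue once it has gone 60 seconds without
      # a consumer or a basic.get (x-expires is in milliseconds).
      channel.queue_declare(queue='example-fanout-queue',
                            arguments={'x-expires': 60000})
      conn.close()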

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548278/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548274] [NEW] rabbitmq message-queue not dropped when no consumers are bound

2016-02-22 Thread youyunyehe
Public bug reported:

Queue q-agent-notifier-port-update_fanout_91e5c8311b1b47a2b39ede94dad9a56b is 
blocked (please refer to the attached picture)
 
Version: RabbitMQ 3.6.0 release, openstack kilo 

This phenomenon sometimes comes up in large-scale environments: when a
RabbitMQ message queue is created with no consumer bound to it but the
producers keep publishing messages to it continuously, the queue will not 
be dropped!
If I want a queue that has no consumers or producers bound to it to be 
dropped, how can I do that?

** Affects: neutron
 Importance: Undecided
 Status: New

** Attachment added: 
"q-agent-notifier-port-update_fanout_91e5c8311b1b47a2b39ede94dad9a56b"
   
https://bugs.launchpad.net/bugs/1548274/+attachment/4577894/+files/rabbitmq_blocked_fanoutqueue.bmp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548274

Title:
  rabbitmq message-queue not dropped when no consumers are bound

Status in neutron:
  New

Bug description:
  Queue q-agent-notifier-port-update_fanout_91e5c8311b1b47a2b39ede94dad9a56b is 
blocked (please refer to the attached picture)
   
  Version: RabbitMQ 3.6.0 release, openstack kilo 

  This phenomenon sometimes comes up in large-scale environments: when a
  RabbitMQ message queue is created with no consumer bound to it but the
  producers keep publishing messages to it continuously, the queue will
  not be dropped!
  If I want a queue that has no consumers or producers bound to it to be
  dropped, how can I do that?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548274/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326901] Re: ServiceBinaryExists - binary for nova-conductor already exists

2016-02-22 Thread Sean Dague
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326901

Title:
  ServiceBinaryExists - binary for nova-conductor already exists

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Trusty:
  Triaged
Status in nova source package in Utopic:
  Fix Released

Bug description:
  We're hitting an intermittent issue where ServiceBinaryExists is
  raised for nova-conductor on deployment.

  From nova-conductor's upstart log ( /var/log/upstart/nova-
  conductor.log ):

  2014-05-15 12:02:25.206 34494 INFO nova.openstack.common.periodic_task [-] 
Skipping periodic task _periodic_update_dns because its interval is negative
  2014-05-15 12:02:25.241 34494 INFO nova.openstack.common.service [-] Starting 
8 workers
  2014-05-15 12:02:25.242 34494 INFO nova.openstack.common.service [-] Started 
child 34501
  2014-05-15 12:02:25.244 34494 INFO nova.openstack.common.service [-] Started 
child 34502
  2014-05-15 12:02:25.246 34494 INFO nova.openstack.common.service [-] Started 
child 34503
  2014-05-15 12:02:25.246 34501 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  2014-05-15 12:02:25.247 34502 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  2014-05-15 12:02:25.247 34494 INFO nova.openstack.common.service [-] Started 
child 34504
  2014-05-15 12:02:25.249 34503 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  2014-05-15 12:02:25.251 34504 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  2014-05-15 12:02:25.254 34505 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  2014-05-15 12:02:25.250 34494 INFO nova.openstack.common.service [-] Started 
child 34505
  2014-05-15 12:02:25.261 34494 INFO nova.openstack.common.service [-] Started 
child 34506
  2014-05-15 12:02:25.263 34494 INFO nova.openstack.common.service [-] Started 
child 34507
  2014-05-15 12:02:25.266 34494 INFO nova.openstack.common.service [-] Started 
child 34508
  2014-05-15 12:02:25.267 34507 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  2014-05-15 12:02:25.268 34506 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  2014-05-15 12:02:25.271 34508 AUDIT nova.service [-] Starting conductor node 
(version 2014.1)
  
/usr/lib/python2.7/dist-packages/nova/openstack/common/db/sqlalchemy/session.py:379:
 DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
match = pattern.match(integrity_error.message)
  
/usr/lib/python2.7/dist-packages/nova/openstack/common/db/sqlalchemy/session.py:379:
 DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
match = pattern.match(integrity_error.message)
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 346, in 
fire_timers
  timer()
File "/usr/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 56, in 
__call__
  cb(*args, **kw)
File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, 
in main
  2014-05-15 12:02:25.862 34502 ERROR oslo.messaging._drivers.impl_rabbit [-] 
AMQP server on localhost:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying 
again in 1 seconds.
  result = function(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py", 
line 480, in run_service
  service.start()
File "/usr/lib/python2.7/dist-packages/nova/service.py", line 172, in start
  self.service_ref = self._create_service_ref(ctxt)
File "/usr/lib/python2.7/dist-packages/nova/service.py", line 224, in 
_create_service_ref
  service = self.conductor_api.service_create(context, svc_values)
File "/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 202, in 
service_create
  return self._manager.service_create(context, values)
File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 966, in wrapper
  return func(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/conductor/manager.py", line 
461, in service_create
  svc = self.db.service_create(context, values)
File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 139, in 
service_create
  return IMPL.service_create(context, values)
File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 
146, in wrapper
  return f(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 
521, in service_create
  binary=values.get('binary'))
  ServiceBinaryExists: Service with host glover binary nova-conductor exists.
  2014-05-15 12:02:25.864 34503 ERROR nova.openstack.common.threadgroup [-] 
Service with host 

[Yahoo-eng-team] [Bug 1507528] Re: Create sample data for policy.v3cloudsample.json

2016-02-22 Thread Steve Martinelli
The change was abandoned and there are no new comments. I do not believe this is a
bug. We don't really support having the sample data in our tools folder
today. There is no way to test that this new file will stay up to date.

** Changed in: keystone
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1507528

Title:
  Create sample data for policy.v3cloudsample.json

Status in OpenStack Identity (keystone):
  Opinion

Bug description:
  It would be useful to have sample data for policy.v3cloudsample.json.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1507528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433687] Re: devstack logs do not contain pid information for log messages with context

2016-02-22 Thread Davanum Srinivas (DIMS)
** No longer affects: oslo.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1433687

Title:
  devstack logs do not contain pid information for log messages with
  context

Status in devstack:
  In Progress
Status in neutron:
  Fix Released

Bug description:
  Compare:

  2015-03-18 15:00:15.990 INFO neutron.wsgi 
[req-412094f3-6b4e-41e8-9f2b-833ff6b3ee7a SecurityGroupsTestJSON-724004567 
SecurityGroupsTestJSON-664869352] 127.0.0.1 - - [18/Mar/2015 15:00:15] "DELETE 
/v2.0/security-groups/9cc93b9a-2d06-46e6-9160-1521683f13f9.json HTTP/1.1" 204 
149 0.060949
  2015-03-18 15:00:16.001 15709 INFO neutron.wsgi [-] (15709) accepted 
('127.0.0.1', 60381)

  This is because in devstack, we override the default log format string
  with one that misses this info. Note that to make it work, it is
  not enough to fall back to the default string, since it uses the
  user_identity context field that is missing in the neutron context
  object. That is because neutron.context.Context does not rely on
  oslo_context.Context when transforming itself with to_dict().

  The proper fix would be:

  - make neutron context reuse oslo_context.Context.to_dict()
  - make devstack not overwrite the default log format string

  Also note that log colorizer from devstack also rewrites the default
  format string value. In that case, we just need to update the string
  to include pid information.

  Also note that the issue may be more far reaching, since devstack
  rewrites the string for other services too (nova, ironic, among
  others).
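
  A sketch of the first proposed fix (illustrative only, not the actual
  patch): neutron's context would inherit the oslo to_dict() so that
  user_identity is always present, merging its own extras on top.

      from oslo_context import context as oslo_context

      class Context(oslo_context.RequestContext):
          def to_dict(self):
              ctx = super(Context, self).to_dict()  # supplies user_identity
              # neutron-specific fields would be merged in here
              return ctx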

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1433687/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1405772] Re: Don't use dict constructor with a sequence of length-2 sequences

2016-02-22 Thread Sean Dague
These kinds of micro-optimizations aren't relevant to our codebase.
Please never submit patches for things like this.

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1405772

Title:
  Don't use dict constructor with a sequence of length-2 sequences

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  PEP-0274 introduced dict comprehensions to replace the dict constructor
  with a sequence of length-2 sequences, these are benefits copied
  from [1]:
The dictionary constructor approach has two distinct disadvantages
from the proposed syntax though.  First, it isn't as legible as a
dict comprehension.  Second, it forces the programmer to create an
in-core list object first, which could be expensive.
  There is deep dive about PEP-0274[2] and basic tests about
  performance[3].

  [1]http://legacy.python.org/dev/peps/pep-0274/
  
[2]http://doughellmann.com/2012/11/12/the-performance-impact-of-using-dict-instead-of-in-cpython-2-7-2.html
  [3]http://paste.openstack.org/show/154798/
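
  A minimal illustration of the two styles being contrasted:

      # Constructor over a sequence of pairs: a temporary list is built first.
      squares_via_ctor = dict([(i, i * i) for i in range(5)])
      # Dict comprehension: same result, no intermediate list.
      squares_via_comp = {i: i * i for i in range(5)}
      assert squares_via_ctor == squares_via_comp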

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1405772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526620] Re: host names are different even if scheduling servers on the same host

2016-02-22 Thread Sean Dague
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1526620

Title:
  host names are different even if scheduling servers on the same host

Status in devstack:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Even when creating a server with scheduler_hint: same_host, the server
  is created on a different host on the gate.

  'OS-EXT-SRV-ATTR:host' value of "show a server" API is different
  between servers like:

   * ubuntu-trusty-2-node-rax-iad-6591881
   * ubuntu-trusty-2-node-rax-iad-6591881-77157

  Now we are trying to add a Tempest test for verifying the scheduler_hint on 
https://review.openstack.org/#/c/257660
  However, the test cannot pass due to this problem.
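
  For reference, the hint under test is passed roughly like this (a
  sketch against the python-novaclient API; the credentials and IDs are
  placeholders):

      from novaclient import client

      nova = client.Client('2', 'user', 'password', 'project',
                           'http://keystone:5000/v2.0')
      ref = nova.servers.get('REFERENCE-SERVER-UUID')
      # Ask the scheduler to place the new server on the same host as ref.
      nova.servers.create(name='same-host-server', image='IMAGE-UUID',
                          flavor='FLAVOR-ID',
                          scheduler_hints={'same_host': [ref.id]})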

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1526620/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522573] Re: tox fails to set environment variables set by user on invocation or in session

2016-02-22 Thread Sean Dague
I believe we fixed this with a new target.

** Changed in: nova
   Status: Confirmed => Fix Committed

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1522573

Title:
  tox fails to set environment variables set by user on invocation or in
  session

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When running GENERATE_SAMPLES=True tox -e functional, the
  GENERATE_SAMPLES value is still set to False (the default). I also
  tried setting it in my session before running tox and still got False.

  To reproduce:

  Add a line to api_sample_base.py [1] to print the value of 
os.getenv('GENERATE_SAMPLES') to stdout.
  Run GENERATE_SAMPLES=True tox -e functional
  GENERATE_SAMPLES outputs as False

  Workaround:

  Hard-code the value "True" into api_sample_base.py for the value of
  self.generate_samples, just don't accidentally commit it ;)

  [1]
  
https://github.com/openstack/nova/blob/1734ce7101982dd95f8fab1ab4815bd258a33744/nova/tests/functional/api_sample_tests/api_sample_base.py#L76

  I have not tested this with other environment variables to find out if
  this is true across the board.
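
  A common way to make such variables visible inside the tox environment
  is tox's passenv option (a sketch; whether the eventual fix used this
  mechanism or a new target is not confirmed here):

      [testenv:functional]
      passenv = GENERATE_SAMPLES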

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1522573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500689] Re: Use IPOpt to validate IP addresses

2016-02-22 Thread Steve Martinelli
The change was abandoned because using IPOpt would break folks; this is
not a bug. Let's continue to use StrOpt for our hostnames.

** Changed in: keystone
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1500689

Title:
  Use IPOpt to validate IP addresses

Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  We can use cfg.IPOpt to validate IP addresses.
  I think it would be helpful in common/config.py.
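
  For context, the suggestion amounts to something like this (a sketch;
  the option names are illustrative):

      from oslo_config import cfg

      # IPOpt rejects values that are not valid IP addresses at parse time,
      ip_opt = cfg.IPOpt('bind_host', default='127.0.0.1',
                         help='IP address to listen on.')
      # whereas StrOpt also accepts hostnames, which is why the change was
      # ultimately rejected.
      host_opt = cfg.StrOpt('public_endpoint',
                            help='Hostname or IP of the public endpoint.')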

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1500689/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415592] Re: Instance DHCP Request are Sometimes Answered by Remote Dnsmasq

2016-02-22 Thread Sean Dague
*** This bug is a duplicate of bug 1318104 ***
https://bugs.launchpad.net/bugs/1318104

** This bug has been marked a duplicate of bug 1318104
   dhcp isolation via iptables does not work

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1415592

Title:
  Instance DHCP Request are Sometimes Answered by Remote Dnsmasq

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  When an instance sends out a DHCPDISCOVER or DHCPREQUEST, it sometimes
  leaves the compute node and gets answered by Dnsmasq running on
  another compute node. The remote Dnsmasq will always respond with
  "DHCPNAK no address available" (to DHCPDISCOVER) or "DHCPNAK address
  not available" (to DHCPREQUEST), because it doesn't have an entry for
  that instance in its config file.

  Syslog:
  Jan 28 15:31:04 x dnsmasq-dhcp[10454]: DHCPREQUEST(brxxx) 192.168.0.x 
12:34:56:78:90:ab
  Jan 28 15:31:04 x dnsmasq-dhcp[10454]: DHCPNAK(brxxx) 192.168.0.x 
12:34:56:78:90:ab address not available

  Expected Behaviour:
  According to blueprint (https://review.openstack.org/#/c/16578/), when 
share_dhcp_address is set to true, the dhcp messages should be firewalled using 
iptables and ebtables

  Environment:
  - Icehouse 2014.1.3
  - Ubuntu 14.04
  - Multihost mode
  - Multiple compute nodes
  - nova-network Vlan-Manager
  - dnsmasq version 2.68
  - nova-compute and nova-network run on the same node (other services run on 
other nodes)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1415592/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1433805] Re: Absence of python-ironicclient in nova requirements.txt making upgrades awkward, python-ironicclient features difficult

2016-02-22 Thread Sean Dague
** Changed in: nova/juno
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1433805

Title:
  Absence of python-ironicclient in nova requirements.txt making
  upgrades awkward, python-ironicclient features difficult

Status in Ironic:
  Won't Fix
Status in OpenStack Compute (nova):
  Opinion
Status in OpenStack Compute (nova) juno series:
  Won't Fix

Bug description:
  Nova's requirements.txt does not list python-ironicclient, meaning a
  stable/juno nova deployment (at least in our gate) will be running
  with the most recent release of python-ironicclient.

  Many new features have been added to Ironic since juno and have been
  introduced incrementally via API micro-versions.   The client library
  released at the time of stable/juno did not send any API version
  header. The current (kilo) server recognizes this and defaults to the
  lowest API version (v1.1) it supports. The desired behavior of python-
  ironicclient is for it to request the greatest API version it
  understands (presently 1.6) [3].

  The nova.virt.ironic driver in juno/stable depends on node states only
  available in the corresponding version [1] of Ironic.  These have
  changed since then and the new node states are exposed via new API
  micro-versions [2]. Using a new client library with a new server
  release will result in the new states being returned to Nova. In
  particular, the state of a node that is available for use, as returned
  by the v1.1 API is "NOSTATE", and as returned by the current Kilo API,
  is "AVAILABLE".

  The goal is to make the client transparently negotiate which version
  to use with the Ironic server if the latest version is not supported.
  This is a feature that would be introduced in a future python-
  ironicclient release.

  However, since Nova is not listing python-ironicclient in its
  requirements, during upgrades we can end up with a stable/juno Nova
  using this new client version to speak to a Kilo Ironic server via the
  most recent API micro versions. This would result in nova driver
  errors as the Ironic server would be returning node states that
  stable/juno driver [1] does not understand [2].

  We either need to introduce python-ironicclient as a listed
  requirement of Nova (at least in stable), or explicitly declare that
  the driver use the older API version in its client interactions, or
  require that operators upgrade Nova (and python-ironicclient) to Kilo
  before upgrading Ironic.

  [1] 
https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/ironic/ironic_states.py?h=stable%2Fjuno
  [2] 
https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/ironic/ironic_states.py
  [3] 
http://specs.openstack.org/openstack/ironic-specs/specs/kilo/api-microversions.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1433805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548195] [NEW] Clicking on project details throws the user out of Horizon

2016-02-22 Thread Liron Kuchlani
Public bug reported:

Description of problem:
Clicking on project details through the 'Identity' tab causes a logout from
Horizon.
It happens after creating a new project and creating a new user as a member
of that project.


Version-Release number of selected component (if applicable):
python-django-horizon-8.0.1-1.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Login to Horizon with an admin user
2. Create a project 
3. Create a new user for that project as member user
4. Logout from Horizon
5. Log in to Horizon with the newly created user
6. Click on 'Identity' -> 'Project' -> 'the-new-project' (project details)
7. Try once more to log in to Horizon with the newly created user

Actual results:
1. Clicking on <'the-new-project'> causes a logout from Horizon
2. Failure to log in to Horizon with the newly created user


Expected results:
1. Project details should be displayed 
2. Logging in to Horizon with the newly created user should succeed

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1548195

Title:
  Clicking on project details throws the user out of Horizon

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description of problem:
  Clicking on project details through the 'Identity' tab causes a logout
  from Horizon.
  It happens after creating a new project and creating a new user as a
  member of that project.

  
  Version-Release number of selected component (if applicable):
  python-django-horizon-8.0.1-1.el7ost.noarch

  How reproducible:
  100%

  Steps to Reproduce:
  1. Login to Horizon with an admin user
  2. Create a project 
  3. Create a new user for that project as member user
  4. Logout from Horizon
  5. Log in to Horizon with the newly created user
  6. Click on 'Identity' -> 'Project' -> 'the-new-project' (project details)
  7. Try once more to log in to Horizon with the newly created user

  Actual results:
  1. Clicking on <'the-new-project'> causes a logout from Horizon
  2. Failure to log in to Horizon with the newly created user

  
  Expected results:
  1. Project details should be displayed 
  2. Logging in to Horizon with the newly created user should succeed

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1548195/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537626] Re: `glance location-update` deletes locations and backend images

2016-02-22 Thread Flavio Percoco
** Also affects: glance/liberty
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1537626

Title:
  `glance location-update` deletes locations and backend images

Status in Glance:
  In Progress
Status in Glance liberty series:
  New
Status in python-glanceclient:
  In Progress

Bug description:
  Hi all,

  I am having trouble using `glance location-update --url <url>
  --metadata <metadata>` to update the metadata of a location. When
  I try to run the command, the locations become a blank list and the image
  is deleted from my backend (swift+https). I have traced it down to the
  following:

  When doing a location-update, glanceclient actually sends two patch
  commands:

  [{'op': 'replace', 'path': '/locations', 'value': []},
   {'op': 'replace',
    'path': '/locations',
    'value': [{u'metadata': {u'key': 'value'},
  {u'url': u'swift+https://image1'}]}]

  This is due to a note in python-
  glanceclient/glanceclient/v2/images.py, update_location():

  # NOTE: The server (as of now) doesn't support modifying individual
  # location entries. So we must:
  #   1. Empty existing list of locations.
  #   2. Send another request to set 'locations' to the new list
  #  of locations.

  However, at the server end, the _do_replace_locations() function which
  handles this call, actually deletes the locations and images when it
  gets the first call with the empty values
  (glance/glance/api/v2/images.py) ???

  def _do_replace_locations(self, image, value):
  if len(image.locations) > 0 and len(value) > 0:
  msg = _("Cannot replace locations from a non-empty "
  "list to a non-empty list.")
  raise webob.exc.HTTPBadRequest(explanation=msg)
  if len(value) == 0:
  # NOTE(zhiyan): this actually deletes the location
  # from the backend store.
  del image.locations[:]
  if image.status == 'active':
  image.status = 'queued'

  This seems to result in the first call deleting all the locations from
  the backend store, and the second call throwing an error because there
  is no location any more.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1537626/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548217] [NEW] Revert the unused code for address scope

2016-02-22 Thread Hong Hui Xiao
Public bug reported:

This bug is to revert the code in [1], which ultimately is not used by
address scopes.


[1] https://review.openstack.org/#/c/192032/

** Affects: neutron
 Importance: Undecided
 Assignee: Hong Hui Xiao (xiaohhui)
 Status: New


** Tags: address-scopes

** Changed in: neutron
 Assignee: (unassigned) => Hong Hui Xiao (xiaohhui)

** Tags added: address-scopes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548217

Title:
  Revert the unused code for address scope

Status in neutron:
  New

Bug description:
  This bug is to revert the code in [1], which ultimately is not used by
  address scopes.

  
  [1] https://review.openstack.org/#/c/192032/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548217/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548267] [NEW] openstack token issue failed for an OpenLDAP user

2016-02-22 Thread Alexandre Carnal
Public bug reported:

When issuing the command "openstack token issue" for an OpenLDAP user,
the command returns: Could not find user: 

Looking at the keystone log, the openstack token issue command searches
for the user in SQL.

The environment variables for the OpenLDAP user:
export OS_PROJECT_DOMAIN_ID=d02cc542b9c741999bc7addda943c701
export OS_PROJECT_DOMAIN=gvadc
export OS_USER_DOMAIN=gvadc
export OS_PROJECT_NAME=ibmcloud
export OS_USERNAME=acarnal
export OS_PASSWORD=My$up€rPa$$w0rd
export OS_AUTH_URL=http://srv-horizon01-p:5000/v3
export OS_IDENTITY_API_VERSION=3

Keystone logs:
2016-02-22 11:32:20.385 26133 INFO keystone.common.wsgi 
[req-b77c4fae-5176-4751-9154-da61749a1a1c - - - - -] POST 
http://srv-horizon01-p:5000/v3/auth/tokens
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core 
[req-b77c4fae-5176-4751-9154-da61749a1a1c - - - - -] Could not find user: 
acarnal
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core Traceback (most 
recent call last):
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core   File 
"/usr/lib/python2.7/site-packages/keystone/auth/plugins/core.py", line 175, in 
_validate_and_normalize_auth_data
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core user_name, 
domain_ref['id'])
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 433, in 
wrapper
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core return 
f(self, *args, **kwargs)
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 444, in 
wrapper
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core return 
f(self, *args, **kwargs)
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core   File 
"/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 1040, in 
decorate
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core 
should_cache_fn)
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core   File 
"/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 651, in 
get_or_create
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core 
async_creator) as value:
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core   File 
"/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 158, in 
__enter__
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core return 
self._enter()
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core   File 
"/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 98, in _enter
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core generated = 
self._enter_create(createdtime)
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core   File 
"/usr/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 149, in 
_enter_create
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core created = 
self.creator()
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core   File 
"/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 619, in 
gen_value
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core 
created_value = creator()
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core   File 
"/usr/lib/python2.7/site-packages/dogpile/cache/region.py", line 1036, in 
creator
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core return 
fn(*arg, **kw)
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core   File 
"/usr/lib/python2.7/site-packages/keystone/identity/core.py", line 868, in 
get_user_by_name
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core ref = 
driver.get_user_by_name(user_name, domain_id)
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core   File 
"/usr/lib/python2.7/site-packages/keystone/identity/backends/sql.py", line 145, 
in get_user_by_name
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core raise 
exception.UserNotFound(user_id=user_name)
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core UserNotFound: 
Could not find user: acarnal
2016-02-22 11:32:20.406 26133 ERROR keystone.auth.plugins.core 
2016-02-22 11:32:20.408 26133 WARNING keystone.common.wsgi 
[req-b77c4fae-5176-4751-9154-da61749a1a1c - - - - -] Authorization failed. 
Could not find user: acarnal (Disable debug mode to suppress these details.) 
(Disable debug mode to suppress these details.) from 192.168.6.30
Could not find user: acarnal (Disable debug mode to suppress these details.) 
(HTTP 401) (Request-ID: req-b77c4fae-5176-4751-9154-da61749a1a1c)

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1548267

Title:
  openstack token issue failed for an OpenLDAP 

[Yahoo-eng-team] [Bug 1299517] Re: quota-class-update

2016-02-22 Thread Sean Dague
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1299517

Title:
   quota-class-update

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in python-novaclient:
  Fix Released

Bug description:
  Can't update the default quota:
  root@blade1-1-live:~# nova --debug quota-class-update --ram -1 default

  
  REQ: curl -i 
'http://XXX.XXX.XXX.XXX:8774/v2/1eaf475499f8479d94d5ed7a4af68703/os-quota-class-sets/default'
 -X PUT -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H 
"Content-Type: application/json" -H "Accept: application/json" -H 
"X-Auth-Token: 62837311542a42a495442d911cc8b12a" -d '{"quota_class_set": 
{"ram": -1}}'

  New session created for: (http://XXX.XXX.XXX.XXX:8774)
  INFO (connectionpool:258) Starting new HTTP connection (1): XXX.XXX.XXX.XXX
  DEBUG (connectionpool:375) Setting read timeout to 600.0
  DEBUG (connectionpool:415) "PUT 
/v2/1eaf475499f8479d94d5ed7a4af68703/os-quota-class-sets/default HTTP/1.1" 404 
52
  RESP: [404] CaseInsensitiveDict({'date': 'Sat, 29 Mar 2014 17:17:32 GMT', 
'content-length': '52', 'content-type': 'text/plain; charset=UTF-8'})
  RESP BODY: 404 Not Found

  The resource could not be found.


  DEBUG (shell:777) Not found (HTTP 404)
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 774, in 
main
  OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 710, in 
main
  args.func(self.cs, args)
File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py", line 
3378, in do_quota_class_update
  _quota_update(cs.quota_classes, args.class_name, args)
File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py", line 
3164, in _quota_update
  manager.update(identifier, **updates)
File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/quota_classes.py", 
line 44, in update
  'quota_class_set')
File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 165, in 
_update
  _resp, body = self.api.client.put(url, body=body)
File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 289, in 
put
  return self._cs_request(url, 'PUT', **kwargs)
File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 260, in 
_cs_request
  **kwargs)
File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 242, in 
_time_request
  resp, body = self.request(url, method, **kwargs)
File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 236, in 
request
  raise exceptions.from_response(resp, body, url, method)
  NotFound: Not found (HTTP 404)
  ERROR: Not found (HTTP 404)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1299517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544133] Re: Can't choose the interface to delete from router on curvature network topology

2016-02-22 Thread Itxaka Serrano
Superseded by bug 1548224.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1544133

Title:
  Can't choose the interface to delete from router on curvature network
  topology

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Description of problem:
  ===
  Can't choose the interface to delete from router on curvature network topology

  Version-Release number of selected component:
  =
  python-django-horizon-8.0.0-10.el7ost.noarch
  openstack-dashboard-8.0.0-10.el7ost.noarch

  How reproducible:
  =
  100%

  Steps to Reproduce:
  ===
  1. Create private network
  2. Create external network
  3. Create router and connect it both networks 
  4. From network topology click no the network icon
  5. Click 'Delete Interface'

  Actual results:
  ===
  Can't choose the interface to delete

  Expected results:
  =
  User can choose the interface to delete from the router

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1544133/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526138] Re: xenserver driver lacks linux bridge qbrXXX

2016-02-22 Thread Sean Dague
This is a parity issue, and probably really a blueprint

** Changed in: nova
   Status: In Progress => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

** Tags added: xenserver

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1526138

Title:
  xenserver driver lacks linux bridge qbrXXX

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  1. Nova latest master branch, should be Mitaka with next release

  2. The XenServer compute driver in OpenStack lacks the Linux bridge
  (qbrXXX) when using neutron networking, and thus it cannot support
  neutron security groups either.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1526138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2016-02-22 Thread Davanum Srinivas (DIMS)
** No longer affects: oslo.privsep

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  In Progress
Status in Aodh:
  In Progress
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Blazar:
  In Progress
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  In Progress
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in django-openstack-auth-kerberos:
  In Progress
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in gce-api:
  In Progress
Status in Glance:
  In Progress
Status in glance_store:
  In Progress
Status in Gnocchi:
  In Progress
Status in heat:
  Fix Released
Status in heat-cfntools:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  In Progress
Status in networking-arista:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  In Progress
Status in networking-ofagent:
  In Progress
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  In Progress
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  In Progress
Status in openstack-ansible:
  In Progress
Status in oslo.cache:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-dracclient:
  In Progress
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  In Progress
Status in shaker:
  In Progress
Status in Solum:
  Fix Released
Status in tempest:
  In Progress
Status in tripleo:
  In Progress
Status in trove-dashboard:
  Fix Released
Status in watcher:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  LOG.warn is deprecated in Python 3 [1]. But it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: If we are using logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
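
  The change itself is mechanical; with the standard library logger:

      import logging

      LOG = logging.getLogger(__name__)
      LOG.warn("deprecated spelling")     # DeprecationWarning on Python 3
      LOG.warning("preferred spelling")   # equivalent and not deprecated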

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293743] Re: Make importing of "local dependencies" consistent

2016-02-22 Thread Sean Dague
** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1293743

Title:
  Make importing of "local dependencies" consistent

Status in OpenStack Compute (nova):
  Won't Fix
Status in OpenStack Core Infrastructure:
  New

Bug description:
  This bug was spurred by a conversation resulting from questions
  arising from https://review.openstack.org/#/c/80741/:

  http://paste.openstack.org/show/73678/

  There are a number of places in Nova where a submodule depends on an
  external library, but that external library is not (for various
  reasons) in the global requirements file. Examples of these kind of
  external "local dependencies" include:

  * libvirt (used in nova.virt.libvirt)
  * guestfs (used in nova.virt.disk.vfs.guestfs)
  * evzookeeper, zookeeper, and evzookeeper.membership
  * iboot (nova.virt.baremetal)

  We should develop some documentation (in HACKING?) that discusses the
  appropriate way to import these "local dependencies", and then ensure
  each one in the above list is done consistently.
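
  One common shape for such imports (a sketch of the pattern only, not
  nova's exact code) defers the import until the dependency is actually
  needed:

      import importlib

      libvirt = None  # resolved lazily; the package may be absent

      def get_connection(uri):
          global libvirt
          if libvirt is None:
              # ImportError surfaces only when the driver is actually used.
              libvirt = importlib.import_module('libvirt')
          return libvirt.open(uri)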

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1293743/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513538] Re: Remove SQL's datetime format inplace of integer timestamps

2016-02-22 Thread Steve Martinelli
Marking this as invalid since the change was abandoned by the bug
originator. Thanks Lance :)

** Changed in: keystone
 Assignee: Lance Bragstad (lbragstad) => (unassigned)

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1513538

Title:
  Remove SQL's datetime format inplace of integer timestamps

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Keystone's current schema uses SQL's DATETIME format. Depending on the
  version of SQL (before or after v5.6.4), it may or may not support
  sub-second accuracy/precision.

  > A DATETIME or TIMESTAMP value can include a trailing fractional
  seconds part in up to microseconds (6 digits) precision. In
  particular, as of MySQL 5.6.4, any fractional part in a value inserted
  into a DATETIME or TIMESTAMP column is stored rather than discarded.

  Source: https://dev.mysql.com/doc/refman/5.6/en/datetime.html

  We should replace keystone's use of DATETIME with an integer
  timestamp. With integer timestamps we can support sub-second accuracy
  regardless of the version of SQL being used.
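
  A small illustration of the proposed representation (not keystone
  code):

      import datetime
      import time

      # Microseconds since the epoch survive any SQL backend as an integer.
      micros = int(time.time() * 10**6)
      # Converting back for display keeps the sub-second part intact.
      restored = datetime.datetime.utcfromtimestamp(micros / 1e6)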

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1513538/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1547563] [NEW] Liberty Neutron server RPC worker threads lose endpoints

2016-02-22 Thread Rachappa B Goni
Public bug reported:

High level description:

When we restart the DHCP agent, we see RPC exceptions. However,
further RPC retries are successful.

The cause of the problem was found to be that the RPC worker threads keep
losing endpoints at the neutron server. This also intermittently causes
DHCP port creation failures and increased latency.

DHCP Agent logs:
2016-02-19 09:20:25.721 46335 INFO neutron.agent.dhcp.agent [-] Synchronizing 
state
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent [-] Unable to sync 
network state.
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent   File 
"/opt/neutron/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", line 
157, in sync_state
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent 
active_networks = self.plugin_rpc.get_active_networks_info()
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent   File 
"/opt/neutron/lib/python2.7/site-packages/neutron/agent/dhcp/agent.py", line 
421, in get_active_networks_info
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent host=self.host)
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent   File 
"/opt/neutron/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 
158, in call
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent 
retry=self.retry)
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent   File 
"/opt/neutron/lib/python2.7/site-packages/oslo_messaging/transport.py", line 
90, in _send
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent 
timeout=timeout, retry=retry)
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent   File 
"/opt/neutron/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py",
 line 466, in send
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent retry=retry)
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent   File 
"/opt/neutron/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py",
 line 457, in _send
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent raise result
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent RemoteError: 
Remote error: UnsupportedVersion Endpoint does not support RPC version 1.1. 
Attempted method: get_active_networks_info
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent [u'Traceback (most 
recent call last):\n', u'  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 143, in _dispatch_and_reply\nexecutor_callback))\n', u'  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", 
line 198, in _dispatch\nraise UnsupportedVersion(version, 
method=method)\n', u'UnsupportedVersion: Endpoint does not support RPC version 
1.1. Attempted method: get_active_networks_info\n'].
2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent 
2016-02-19 09:20:30.731 46335 INFO neutron.agent.dhcp.agent [-] Synchronizing 
state
2016-02-19 09:20:30.798 46335 INFO neutron.agent.dhcp.agent [-] Synchronizing 
state complete


Neutron Server side RPC threads endpoints dump:
2016-02-19 12:42:43.445 20786 DEBUG oslo_messaging.rpc.dispatcher [-] 
endpoints= ([]) 
_dispatch 
/opt/neutron/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py:174

2016-02-19 12:42:43.508 20785 DEBUG oslo_messaging.rpc.dispatcher [-]
endpoints= ([]) _dispatch /opt/neutron/lib/python2.7/site-
packages/oslo_messaging/rpc/dispatcher.py:174

2016-02-19 12:42:48.520 20786 DEBUG oslo_messaging.rpc.dispatcher [-]
endpoints= ([, , ]) _dispatch /opt/neutron/lib/python2.7/site-
packages/oslo_messaging/rpc/dispatcher.py:174


Packages info:
neutron (7.0.3)
oslo.concurrency (3.4.0)
oslo.config (3.6.0)
oslo.context (2.0.0)
oslo.db (4.3.1)
oslo.messaging (4.1.0)
oslo.middleware (3.5.0)
oslo.serialization (2.3.0)
oslo.utils (3.5.0)

Neutron Server Configuration:
   RPC worker threads = 2
   API worker threads = 4

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1547563

Title:
  Liberty Neutron server RPC worker threads lose endpoints

Status in neutron:
  New

Bug description:
  High level description:

  When we restart the DHCP agent, we see RPC exceptions. However,
  further RPC retries are successful.

  The cause of the problem was found to be that the RPC worker threads
  keep losing endpoints at the neutron server. This also intermittently
  causes DHCP port creation failures and increased latency.

  DHCP Agent logs:
  2016-02-19 09:20:25.721 46335 INFO neutron.agent.dhcp.agent [-] Synchronizing 
state
  2016-02-19 09:20:25.730 46335 ERROR neutron.agent.dhcp.agent [-] Unable to 
sync network state.
  2016-02-19 09:20:25.730 46335 ERROR 

[Yahoo-eng-team] [Bug 1291489] Re: list-secgroup fail if no secgroups defined for server

2016-02-22 Thread Sean Dague
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291489

Title:
  list-secgroup fail if no secgroups defined for server

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  No issues if there is at least one secgroup defined for the server.

  If no secgroups are defined for the server, it fails with 400 error.

  $ nova --debug list-secgroup vp25q00cs-osfe11b124f4.isg.apple.com
  .
  .
  .
  RESP: [400] CaseInsensitiveDict({'date': 'Wed, 12 Mar 2014 17:08:11 GMT', 
'content-length': '141', 'content-type': 'application/json; charset=UTF-8', 
'x-compute-request-id': 'req-20cb1b69-a69c-435c-9e85-3eec2fb2ae61'})
  RESP BODY: {"badRequest": {"message": "The server could not comply with the 
request since it is either malformed or otherwise incorrect.", "code": 400}}

  DEBUG (shell:740) The server could not comply with the request since it is 
either malformed or otherwise incorrect. (HTTP 400) (Request-ID: 
req-20cb1b69-a69c-435c-9e85-3eec2fb2ae61)
  Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/novaclient/shell.py", line 737, in 
main
  OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
File "/Library/Python/2.7/site-packages/novaclient/shell.py", line 673, in 
main
  args.func(self.cs, args)
File "/Library/Python/2.7/site-packages/novaclient/v1_1/shell.py", line 
1904, in do_list_secgroup
  groups = server.list_security_group()
File "/Library/Python/2.7/site-packages/novaclient/v1_1/servers.py", line 
328, in list_security_group
  return self.manager.list_security_group(self)
File "/Library/Python/2.7/site-packages/novaclient/v1_1/servers.py", line 
883, in list_security_group
  base.getid(server), 'security_groups', SecurityGroup)
File "/Library/Python/2.7/site-packages/novaclient/base.py", line 61, in 
_list
  _resp, body = self.api.client.get(url)
File "/Library/Python/2.7/site-packages/novaclient/client.py", line 229, in 
get
  return self._cs_request(url, 'GET', **kwargs)
File "/Library/Python/2.7/site-packages/novaclient/client.py", line 213, in 
_cs_request
  **kwargs)
File "/Library/Python/2.7/site-packages/novaclient/client.py", line 195, in 
_time_request
  resp, body = self.request(url, method, **kwargs)
File "/Library/Python/2.7/site-packages/novaclient/client.py", line 189, in 
request
  raise exceptions.from_response(resp, body, url, method)
  BadRequest: The server could not comply with the request since it is either 
malformed or otherwise incorrect. (HTTP 400) (Request-ID: 
req-20cb1b69-a69c-435c-9e85-3eec2fb2ae61)
  ERROR: The server could not comply with the request since it is either 
malformed or otherwise incorrect. (HTTP 400) (Request-ID: 
req-20cb1b69-a69c-435c-9e85-3eec2fb2ae61)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229520] Re: No neutron code coverage for v3 security groups

2016-02-22 Thread Sean Dague
** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1229520

Title:
  No neutron code coverage for v3 security groups

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  No neutron code coverage for v3 security groups

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1229520/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1547678] [NEW] Service plugins loaded on startup don't extend API resources

2016-02-22 Thread Boden R
Public bug reported:

As part of [1] and [2], support was added to load service plugins at
start-up.

Specifically [1] added the 'auto-allocated-topology' (aka 'get me a
network') as a default service plugin for neutron.

However, when auto-allocated-topology is loaded as a default service
plugin, it does not appear to extend the network resource as it defines
in [3]; i.e., the 'is_default' attribute is not added to networks.

Therefore there's currently no way to set a network as default:
---
neutron --debug net-create default-ext-net --router:external=True --is-default 
True
...
DEBUG: keystoneauth.session REQ: curl -g -i -X POST 
http://10.34.232.114:9696/v2.0/networks.json -H "User-Agent: 
python-neutronclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}8020291c3aa0de969eddc411f24d1cc47749e3af" -d '{"network": {"is_default": 
"True", "router:external": "True", "name": "default-ext-net", "admin_state_up": 
true}}'
DEBUG: keystoneauth.session RESP: [400] Date: Fri, 19 Feb 2016 21:35:26 GMT 
Connection: keep-alive Content-Type: application/json; charset=UTF-8 
Content-Length: 111 X-Openstack-Request-Id: 
req-f4cf54a0-a0d0-4d8d-88d5-a050f897678e
RESP BODY: {"NeutronError": {"message": "Unrecognized attribute(s) 
'is_default'", "type": "HTTPBadRequest", "detail": ""}}

DEBUG: neutronclient.v2_0.client Error message: {"NeutronError": {"message": 
"Unrecognized attribute(s) 'is_default'", "type": "HTTPBadRequest", "detail": 
""}}
ERROR: neutronclient.shell Unrecognized attribute(s) 'is_default'
...
---

Note -- from my logs I can see the extension being loaded:

2016-02-19 13:08:41.848 8202 DEBUG neutron.manager [-] Successfully
loaded auto-allocated-topology plugin. Description: Auto Allocated
Topology - aka get me a network. _load_service_plugins
/opt/stack/neutron/neutron/manager.py:207


Perhaps the extension manager extend_resources() is not called in the call-flow 
of loading default service plugins?


[1] https://review.openstack.org/#/c/273439/10
[2] https://bugs.launchpad.net/neutron/+bug/1544383
[3] 
https://github.com/openstack/neutron/blob/master/neutron/extensions/auto_allocated_topology.py#L36
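
For reference, the contract involved looks roughly like this (a minimal
sketch based on [3], not the actual neutron code):

    IS_DEFAULT = 'is_default'

    EXTENDED_ATTRIBUTES_2_0 = {
        'networks': {
            IS_DEFAULT: {'allow_post': True, 'allow_put': True,
                         'default': False, 'is_visible': True},
        }
    }

    class Auto_allocated_topology(object):  # sketch of the extension side

        def get_extended_resources(self, version):
            # The extension manager's extend_resources() consumes this;
            # if it never runs for default service plugins, 'is_default'
            # is never added to the networks resource.
            return EXTENDED_ATTRIBUTES_2_0 if version == '2.0' else {}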

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1547678

Title:
  Service plugins loaded on startup don't extend API resources

Status in neutron:
  New

Bug description:
  As part of [1] and [2], support was added to load service plugins at
  start-up.

  Specifically [1] added the 'auto-allocated-topology' (aka 'get me a
  network') as a default service plugin for neutron.

  However, when loaded as a default service plugin, auto-allocated-topology
  does not appear to extend the network resource as it defines in [3];
  that is, the 'is_default' attribute is not added to networks.

  Therefore there's currently no way to set a network as default:
  ---
  neutron --debug net-create default-ext-net --router:external=True 
--is-default True
  ...
  DEBUG: keystoneauth.session REQ: curl -g -i -X POST 
http://10.34.232.114:9696/v2.0/networks.json -H "User-Agent: 
python-neutronclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}8020291c3aa0de969eddc411f24d1cc47749e3af" -d '{"network": {"is_default": 
"True", "router:external": "True", "name": "default-ext-net", "admin_state_up": 
true}}'
  DEBUG: keystoneauth.session RESP: [400] Date: Fri, 19 Feb 2016 21:35:26 GMT 
Connection: keep-alive Content-Type: application/json; charset=UTF-8 
Content-Length: 111 X-Openstack-Request-Id: 
req-f4cf54a0-a0d0-4d8d-88d5-a050f897678e
  RESP BODY: {"NeutronError": {"message": "Unrecognized attribute(s) 
'is_default'", "type": "HTTPBadRequest", "detail": ""}}

  DEBUG: neutronclient.v2_0.client Error message: {"NeutronError": {"message": 
"Unrecognized attribute(s) 'is_default'", "type": "HTTPBadRequest", "detail": 
""}}
  ERROR: neutronclient.shell Unrecognized attribute(s) 'is_default'
  ...
  ---

  Note -- from my logs I can see the extension being loaded:

  2016-02-19 13:08:41.848 8202 DEBUG neutron.manager [-] Successfully
  loaded auto-allocated-topology plugin. Description: Auto Allocated
  Topology - aka get me a network. _load_service_plugins
  /opt/stack/neutron/neutron/manager.py:207

  
  Perhaps the extension manager extend_resources() is not called in the 
call-flow of loading default service plugins?


  [1] https://review.openstack.org/#/c/273439/10
  [2] https://bugs.launchpad.net/neutron/+bug/1544383
  [3] 
https://github.com/openstack/neutron/blob/master/neutron/extensions/auto_allocated_topology.py#L36

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1547678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team

[Yahoo-eng-team] [Bug 1484578] Re: eventlet.tpool.execute() causes creation of unnecessary OS native threads when running unit tests

2016-02-22 Thread Sean Dague
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484578

Title:
  eventlet.tpool.execute() causes creation of unnecessary OS native
  threads when running unit tests

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  To cooperate with blocking calls, which can't be monkey-patched, eventlet
provides support for wrapping those in native OS threads by means of the
eventlet.tpool module. E.g. nova-compute uses it extensively to make sure calls
to libvirt do not block the whole process.
  
  When used in unit tests, eventlet.tpool creates a pool of 20 native OS
threads per test-running process (assuming there was at least one unit test to
actually execute this part of the code in this process).
  
  In unit tests all blocking calls (like calls to libvirt) are monkey-patched
anyway, so there is little sense in wrapping those by means of tpool.execute()
(as we don't want to test eventlet either).
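
  A minimal sketch of a fixture that avoids the native thread pool by
  running tpool callables inline (safe precisely because the blocking
  calls are monkey-patched in tests):

      import fixtures

      class InlineTpool(fixtures.Fixture):
          """Run eventlet.tpool.execute() callables inline."""

          def _setUp(self):
              self.useFixture(fixtures.MonkeyPatch(
                  'eventlet.tpool.execute',
                  lambda func, *args, **kwargs: func(*args, **kwargs)))

  A test case would then simply call self.useFixture(InlineTpool()) in
  its setUp().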

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1484578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323975] Re: do not use default=None for config options

2016-02-22 Thread Julien Danjou
** Changed in: gnocchi
   Status: Fix Committed => Fix Released

** Changed in: gnocchi
Milestone: None => 2.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323975

Title:
  do not use default=None for config options

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Fix Released
Status in oslo.messaging:
  Fix Released
Status in Rally:
  Fix Committed
Status in Sahara:
  Fix Released
Status in tempest:
  In Progress
Status in Trove:
  In Progress
Status in zaqar:
  Fix Released

Bug description:
  In the cfg module, default=None is already the default value, so it's
  not necessary to set it again when defining config options.
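
  In other words (a minimal illustration with oslo.config; 'my_option'
  is a made-up name):

      from oslo_config import cfg

      # redundant -- None is already the default
      opt = cfg.StrOpt('my_option', default=None, help='Example option')

      # equivalent and preferred
      opt = cfg.StrOpt('my_option', help='Example option')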

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1323975/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1437992] Re: policy file in policy.d will be reloaded every rest api call

2016-02-22 Thread Sean Dague
This is addressed via oslo.policy

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1437992

Title:
  policy file in policy.d will be reloaded every rest api call

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.policy:
  Fix Released
Status in oslo.policy kilo series:
  Fix Released

Bug description:
  the policy file in policy.d will be reloaded every time a REST API
  call is made.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1437992/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483132] Re: ssh-keygen-to-Paramiko change breaks third-party tools

2016-02-22 Thread Sean Dague
I do feel like private key format is not part of the nova contract. I'm
sorry Go tools are so limited in what they support. The right path is
working with upstream paramiko on this.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483132

Title:
  ssh-keygen-to-Paramiko change breaks third-party tools

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Changing ssh key generation from OpenSSH's ssh-keygen to the Paramiko
  library [1][2] changed (unintentionally?) the ASN.1 encoding format of
  SSH private keys from DER to BER.  (DER is a strict subset of BER, so
  anything that can read BER can read DER, but not necessarily the other
  way around.)

  Some third-party tools only support DER and this has created at least
  one issue [3] (specifically because Go's standard library only
  supports DER).

  I have provided Paramiko with a small change that makes its SSH
  private key output equal to OpenSSH's ssh-keygen output (and
  presumably DER formatted) [4].

  Providing a change to Paramiko is just one method of addressing this
  backwards-incompatibility and interoperability issue.  Should the
  Paramiko change be accepted the unit test output vectors will need to
  be changed, but should it not, is a reversion of or modification to
  Nova acceptable to maintain backwards-compatibility and
  interoperability?

  [1] https://review.openstack.org/157931
  [2] 
http://git.openstack.org/cgit/openstack/nova/commit/?id=3f3f9bf22efd2fb209d2a2fe0246f4857cd2d21a
  [3] https://github.com/mitchellh/packer/issues/2526
  [4] https://github.com/paramiko/paramiko/pull/572
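
  The difference is easy to reproduce with paramiko's public API (a
  minimal sketch; the PEM armor looks identical, only the inner ASN.1
  encoding differs):

      import paramiko

      # Serialize a private key the way nova does via paramiko [1][2];
      # per this report the body comes out BER-encoded.
      key = paramiko.RSAKey.generate(2048)
      key.write_private_key_file('paramiko_rsa')

      # ssh-keygen output is DER-encoded, so DER-only parsers
      # (e.g. Go's standard library, see [3]) reject the file above.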

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546230] Re: change the default to the nocloud data source

2016-02-22 Thread Ben Howard
Nack'ing this request. After discussing this idea with the server team
offline, we concluded this is not desirable. For Snappy and images,
livecd-rootfs and/or the image build process should drive the
image-specific cloud-init configuration.

** Changed in: cloud-init
   Status: New => Invalid

** Changed in: cloud-init
 Assignee: (unassigned) => Ben Howard (utlemming)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1546230

Title:
  change the default to the nocloud data source

Status in cloud-init:
  Invalid

Bug description:
  Today cloud-init defaults to a data source that might not be the
  correct one. Ideally cloud-init should have a sane default of the
  lowest common denominator, which would be the nocloud data source.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1546230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1545101] Re: Nova Metadata server in Mitaka can not work with Liberty config

2016-02-22 Thread Armando Migliaccio
** No longer affects: grenade

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1545101

Title:
  Nova Metadata server in Mitaka can not work with Liberty config

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  http://logs.openstack.org/59/265759/24/experimental/gate-grenade-dsvm-
  neutron-
  
multinode/8f1deec/logs/new/screen-n-api.txt.gz?level=INFO#_2016-02-12_16_28_16_860

  2016-02-12 16:28:16.860 20168 INFO nova.metadata.wsgi.server [-] Traceback 
(most recent call last):
File "/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py", line 470, 
in handle_one_response
  result = self.application(self.environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/paste/urlmap.py", line 216, in 
__call__
  return app(environ, start_response)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in 
__call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in 
call_func
  return self.func(req, *args, **kwargs)
File "/opt/stack/new/nova/nova/api/ec2/__init__.py", line 32, in __call__
  return webob.exc.HTTPException(message=_DEPRECATION_MESSAGE)
  TypeError: __init__() takes exactly 3 arguments (2 given)

  This only shows up in the gate-grenade-dsvm-neutron-multinode job
  which is not running the n-api-meta service but is running the neutron
  metadata service, which has a bunch of warnings because it's not
  getting valid responses back from the nova metadata API (b/c it's not
  running):

  http://logs.openstack.org/59/265759/24/experimental/gate-grenade-dsvm-
  neutron-multinode/8f1deec/logs/new/screen-q-meta.txt.gz?level=TRACE
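
  For context, webob.exc.HTTPException.__init__() takes (message,
  wsgi_response), which is why the single-keyword call above blows up; a
  minimal sketch of a working variant (the status class is illustrative,
  not the actual fix):

      import webob.dec
      import webob.exc

      class DeprecatedMiddleware(object):  # hypothetical stand-in
          @webob.dec.wsgify
          def __call__(self, req):
              # Concrete subclasses such as HTTPBadRequest are complete
              # WSGI responses, so no wsgi_response argument is needed.
              return webob.exc.HTTPBadRequest(
                  explanation=_DEPRECATION_MESSAGE)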

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1545101/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464298] Re: default hash function and hash format changed in OpenSSH 6.8 (ssh-keygen)

2016-02-22 Thread Sean Dague
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464298

Title:
  default hash function and hash format changed in OpenSSH 6.8 (ssh-
  keygen)

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The following tests fail on Fedora 22 because ssh-keygen output
  changed in OpenSSH 6.8:

  * nova.tests.unit.api.ec2.test_cloud.CloudTestCase.test_import_key_pair
  * nova.tests.unit.compute.test_keypairs.ImportKeypairTestCase.test_success_ssh

  Before, OpenSSH used MD5 and colon-separated hex to display a
  fingerprint. It now uses SHA256 encoded in base64:

  """
   * Add FingerprintHash option to ssh(1) and sshd(8), and equivalent
 command-line flags to the other tools to control algorithm used
 for key fingerprints. The default changes from MD5 to SHA256 and
 format from hex to base64.
  """
  http://www.openssh.com/txt/release-6.8
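
  For tests or scripts that still need the old behaviour, OpenSSH 6.8
  keeps it behind a flag, e.g.:

      $ ssh-keygen -l -E md5 -f key.pub   # old-style MD5 hex fingerprint
      $ ssh-keygen -l -f key.pub          # new default: SHA256/base64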

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1464298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1335023] Re: Neutron fails to create external network gateway when gateway's IP in different subnet with br-ex

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/233287
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b6126bc0f17f348ff6303b9bd6041cb018479ef9
Submitter: Jenkins
Branch:master

commit b6126bc0f17f348ff6303b9bd6041cb018479ef9
Author: Sreekumar S 
Date:   Sat Oct 10 03:18:00 2015 +0530

Fix for adding gateway with IP outside subnet

Currently 'force_gateway_on_subnet' configuration is set to True
by default and enforces the subnet on to the gateway. With this
fix 'force_gateway_on_subnet' can be changed to False, and
gateway outside the subnet can be added.
Before adding the default route, a route to the gateway IP is
added. This applies to both external and internal networks.

This configuration option is deprecated, and should be removed
in a future release. It should always allow gateway outside the
subnet. This is done as a separate patch
https://review.openstack.org/#/c/277303/

Change-Id: I3a942cf98d263681802729cf09527f06c80fab2b
Closes-Bug: #1335023
Closes-Bug: #1398768


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1335023

Title:
  Neutron fails to create external network gateway when gateway's IP in
  different subnet with br-ex

Status in neutron:
  Fix Released

Bug description:
  Hi guys,

  I encountered a problem with neutron when trying to create an external
  network with a gateway in a different subnet:

  neutron subnet-create ext-net --name ext-subnet \
    --allocation-pool start=46.105.252.216,end=46.105.252.219\
    --disable-dhcp --gateway 176.31.105.254 46.105.252.0/24

  The external network has a gateway in a different subnet:
  46.105.252.216/24 and 176.31.105.254.
  I need something like this due to the router configuration in the DC.

  The problem is that neutron shows no error, and on the dashboard the
  ext-net also shows its gateway 176.31.105.254. However, packets are not
  routed, because no default gateway entry is added to the router's IP
  routing table:

  sudo ip netns exec qrouter-f918cbb7-dc0c-4713-a6f5-3c66b46e12cf route
  -n

  Destination Gateway Genmask Flags Metric Ref
  Use Iface

  46.105.252.00.0.0.0 255.255.255.0   U 0  00 
qg-0103d6fa-31
  192.168.100.0   0.0.0.0 255.255.255.0   U 0  00 
qr-343ab2cb-f5

  I can work around this by manually adding two lines to the routing table:

  Destination Gateway Genmask Flags Metric RefUse Iface
  0.0.0.0 176.31.105.254  0.0.0.0 UG0  00 
qg-0103d6fa-31
  46.105.252.00.0.0.0 255.255.255.0   U 0  00 
qg-0103d6fa-31
  176.31.105.254  0.0.0.0 255.255.255.255 UH0  00 
qg-0103d6fa-31
  192.168.100.0   0.0.0.0 255.255.255.0   U 0  00 
qr-343ab2cb-f5

  Then it worked fine!
  I believe this is a bug: adding a gateway in a different subnet to the
  routing table will be rejected, so we need to add this host route first
  before adding the gateway:

  176.31.105.254  0.0.0.0 255.255.255.255 UH0  0
  0 qg-0103d6fa-31

  So either we need to show users an error ("adding a gateway in a
  different subnet is not allowed"), or we should support adding such a
  gateway properly.
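
  In other words, supporting this properly would mean the L3 agent doing
  the equivalent of (namespace and addresses taken from this report):

      sudo ip netns exec qrouter-f918cbb7-dc0c-4713-a6f5-3c66b46e12cf \
          ip route add 176.31.105.254/32 dev qg-0103d6fa-31
      sudo ip netns exec qrouter-f918cbb7-dc0c-4713-a6f5-3c66b46e12cf \
          ip route add default via 176.31.105.254 dev qg-0103d6fa-31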

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1335023/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523459] Re: Instance can be booted into the "internal" availability zone

2016-02-22 Thread Sean Dague
This is a very low priority bug.

You can only see the view above if you are an admin, so 'internal' is
only a key word in those cases.

The user can only use this to make their own boot fail. While it might
be nicer to validate the AZ up front, I'm not entirely convinced this is
any different from them trying to push to a 'bogus' AZ.

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Status: Opinion => Confirmed

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Importance: Low => Wishlist

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523459

Title:
  Instance can be booted into the "internal" availability zone

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Currently, only the nova-compute service has its own availability zone. 
Services such as nova-scheduler, nova-network, and nova-conductor  appear in 
the AZ named "internal". (ref: 
http://docs.openstack.org/openstack-ops/content/scaling.html) For example:
  $ nova availability-zone-list
  +-----------------------+------------------------------------+
  | Name                  | Status                             |
  +-----------------------+------------------------------------+
  | internal              | available                          |
  | |- node1              |                                    |
  | | |- nova-conductor   | enabled :-) 2015-12-07T11:38:09.00 |
  | | |- nova-consoleauth | enabled :-) 2015-12-07T11:38:05.00 |
  | | |- nova-scheduler   | enabled :-) 2015-12-07T11:38:12.00 |
  | | |- nova-cert        | enabled :-) 2015-12-07T11:38:07.00 |
  | nova                  | available                          |
  | |- node2              |                                    |
  | | |- nova-compute     | enabled :-) 2015-12-07T11:38:12.00 |
  | |- node3              |                                    |
  | | |- nova-compute     | enabled :-) 2015-12-07T11:38:12.00 |
  +-----------------------+------------------------------------+

  
  However, we can schedule an instance to the "internal" AZ using the
  following command:
  $ nova boot --flavor 42 --image  --availability-zone "internal" test
  It succeeds with no error message!

  But this "test" instance will be in ERROR status because there is no compute 
node in the "internal" AZ.
  $ nova list
  +--------------------------------------+------+--------+------------+-------------+----------+
  | ID                                   | Name | Status | Task State | Power State | Networks |
  +--------------------------------------+------+--------+------------+-------------+----------+
  | eca73033-15cf-402a-b39a-a91e497e3e07 | test | ERROR  | -          | NOSTATE     |          |
  +--------------------------------------+------+--------+------------+-------------+----------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1523459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479578] Re: Domain-specific config breaks some ops

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/282080
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=c73a81e61d059f4ab9e9cc29bad5528e1459444b
Submitter: Jenkins
Branch:master

commit c73a81e61d059f4ab9e9cc29bad5528e1459444b
Author: Matthew Edmonds 
Date:   Thu Feb 18 17:02:09 2016 -0500

Allow user list without specifying domain

With a single domain environment, users can be listed without
specifying a domain. When moving to a multiple domain environment,
this remains true for domain-scoped tokens but not for project-scoped
tokens. Project-scoped tokens currently only work if the domain_id
query parameter is specified. This has been a source of pain to many
users, and is unnecessary. Just as the desired domain is assumed to be
that to which the token is scoped when the token is domain-scoped,
keystone can assume the desired domain is that of the project's domain
when the token is project-scoped.

Change-Id: I1d06935c06661109a523c5b4547ff01f23235a89
Closes-Bug: 1479578


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1479578

Title:
  Domain-specific config breaks some ops

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  I set up a multi-domain config on my devstack and tried to do a
  simple:

  $ openstack user list

  with project-scoped environment variables (in fact I downloaded the
  admin-openrc.sh from Horizon).

  This results in:

  ERROR: openstack The request you have made requires authentication.
  (Disable debug mode to suppress these details.) (HTTP 401) (Request-
  ID: req-b687e823-9896-4905-83d3-b1e45fa966ed)

  If I disable domain-specific configs in the keystone.conf, it works
  again.

  If it *is* enabled, I can force a domain-specific request using
  something like:

  $ OS_TENANT_ID= OS_TENANT_NAME= OS_PROJECT_NAME= openstack --os-
  domain-name  user list

  However if I specify the default domain then I get this:

  ERROR: openstack User 0fa9633d884a42448bbd386778ca6b87 has no access
  to domain default (Disable debug mode to suppress these details.)
  (HTTP 401) (Request-ID: req-65e053e4-33c2-4b7b-aedf-30d3ef88735c)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1479578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1547705] Re: There is no way to use the default subnet pool without first looking it up

2016-02-22 Thread Carl Baldwin
neutron:  https://review.openstack.org/#/c/282021/
python-neutronclient:  https://review.openstack.org/#/c/282583/

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: python-neutronclient
 Assignee: (unassigned) => Carl Baldwin (carl-baldwin)

** Changed in: python-neutronclient
   Importance: Undecided => High

** Changed in: python-neutronclient
   Status: New => In Progress

** Changed in: python-neutronclient
Milestone: None => 4.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1547705

Title:
  There is no way to use the default subnet pool without first looking
  it up

Status in neutron:
  In Progress
Status in python-neutronclient:
  In Progress

Bug description:
  With the recent resolution of [1], which removed the automatic
  fallback to the default subnet pool, the only way to use the default
  subnetpool is to manually look it up and specify it on the command
  land.  This made things much less convenient for the end user.

  While discussing [1], I agreed to provide a new extension to make this
  convenient again.  The extension should be added to the server side to
  allow any API consumers to make use of it.

  [1] https://bugs.launchpad.net/neutron/+bug/1545199
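
  With the extension in place, the intended end-user flow is roughly the
  following (flag name taken from the client review above, so treat it
  as tentative):

      $ neutron subnet-create --use-default-subnetpool --ip-version 4 mynet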

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1547705/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546189] Re: Add driver details in architecture doc

2016-02-22 Thread Steve Martinelli
We have docs about it here:
http://docs.openstack.org/developer/keystone/developing_drivers.html

This bug is so vague, I have no idea what was to be added.

** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1546189

Title:
  Add driver details in architecture doc

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  This bug tracks fixing the issue referred to in the comment below from
https://review.openstack.org/#/c/209524:

  Lance Bragstad, Sep 3 12:04 AM, Patch Set 21 (1 comment), on
  keystone/resource/core.py, line 1367:
  "Does this one need to be added to the architecture doc?"

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1546189/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1547544] [NEW] heat: MessagingTimeout: Timed out waiting for a reply to message ID

2016-02-22 Thread Prashant Shetty
Public bug reported:

Setup:

Single controller [48 GB RAM, 16vCPU, 120GB Disk]
3 Network Nodes
100 ESX hypervisors distributed in 10 nova-compute nodes

Test:

1. Create a /16 network
2. Heat template which will launch 100 instances on the network created in
step 1
3. Create 10 stacks back-to-back so that we reach 1000 instances without
waiting for the previous stack to complete

Observation:

Stack creations are failing while nova run_periodic_tasks fires at different
places like _heal_instance_info_cache, _sync_scheduler_instance_info,
_update_available_resource, etc.

Have attached a sample heat template, heat logs, and a nova-compute log from
one of the hosts.


Logs:

2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
2016-02-19 04:21:54.691 TRACE nova.compute.manager return f(*args, **kwargs)
2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 553, in 
_update_available_resource
2016-02-19 04:21:54.691 TRACE nova.compute.manager context, self.host, 
self.nodename)
2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
174, in wrapper
2016-02-19 04:21:54.691 TRACE nova.compute.manager args, kwargs)
2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/conductor/rpcapi.py", line 240, in 
object_class_action_versions
2016-02-19 04:21:54.691 TRACE nova.compute.manager args=args, kwargs=kwargs)
2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
158, in call
2016-02-19 04:21:54.691 TRACE nova.compute.manager retry=self.retry)
2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send
2016-02-19 04:21:54.691 TRACE nova.compute.manager timeout=timeout, 
retry=retry)
2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 465, in send
2016-02-19 04:21:54.691 TRACE nova.compute.manager retry=retry)
2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 454, in _send
2016-02-19 04:21:54.691 TRACE nova.compute.manager result = 
self._waiter.wait(msg_id, timeout)
2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 337, in wait
2016-02-19 04:21:54.691 TRACE nova.compute.manager message = 
self.waiters.get(msg_id, timeout=timeout)
2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 239, in get
2016-02-19 04:21:54.691 TRACE nova.compute.manager 'to message ID %s' % 
msg_id)
2016-02-19 04:21:54.691 TRACE nova.compute.manager MessagingTimeout: Timed out 
waiting for a reply to message ID a87a7f358a0948efa3ab5beb0c8f45e7
--


stack@esx-compute-9:/opt/stack/nova$ git log -1
commit d51c5670d8d26e989d92eb29658eed8113034c0f
Merge: 4fade90 30d5d80
Author: Jenkins 
Date:   Thu Feb 18 17:56:32 2016 +

Merge "reset task_state after select_destinations failed."
stack@esx-compute-9:/opt/stack/nova$
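
Not a fix for the underlying contention, but for scale tests like this
the usual mitigation is to raise the oslo.messaging reply timeout in
nova.conf (the value is illustrative):

    [DEFAULT]
    # default is 60 seconds, which these periodic tasks exceed under load
    rpc_response_timeout = 300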

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1547544

Title:
  heat: MessagingTimeout: Timed out waiting for a reply to message ID

Status in OpenStack Compute (nova):
  New

Bug description:
  Setup:

  Single controller [48 GB RAM, 16vCPU, 120GB Disk]
  3 Network Nodes
  100 ESX hypervisors distributed in 10 nova-compute nodes

  Test:

  1. Create a /16 network
  2. Heat template which will launch 100 instances on the network created
  in step 1
  3. Create 10 stacks back-to-back so that we reach 1000 instances
  without waiting for the previous stack to complete

  Observation:

  Stack creations are failing while nova run_periodic_tasks fires at
  different places like _heal_instance_info_cache,
  _sync_scheduler_instance_info, _update_available_resource, etc.

  Have attached a sample heat template, heat logs, and a nova-compute log
  from one of the hosts.

  
  Logs:

  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  2016-02-19 04:21:54.691 TRACE nova.compute.manager return f(*args, 
**kwargs)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 553, in 
_update_available_resource
  2016-02-19 04:21:54.691 TRACE 

[Yahoo-eng-team] [Bug 1547564] [NEW] nova baremetal-node-list unexpected API error 500

2016-02-22 Thread mark lewis
Public bug reported:

$ nova baremetal-node-list

ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
 (HTTP 500) (Request-ID: 
req-78138575-cdcc-408e-a232-dc717bfe96d9)

Nova API log.
2016-02-19 15:38:04.507 DEBUG nova.osapi_compute.wsgi.server 
[req-48a19e8f-00dd-4fdf-a079-5cd17a030f7e None None] (31654) accepted 
('10.0.0.150', 37341) from (pid=31654) server 
/usr/lib/python2.7/site-packages/eventlet/wsgi.py:867
2016-02-19 15:38:04.508 DEBUG keystoneauth.session [-] REQ: curl -g -i --cacert 
"/opt/stack/data/ca-bundle.pem" -X GET http://10.0.0.150:35357/v3/auth/tokens 
-H "X-Subject-Token: {SHA1}2548654be6a35b05a064f0bd15e4da50119ebeda" -H 
"User-Agent: python-keystoneclient" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}05b09de98dd6820f5766b21e415cbc27190125e6" from (pid=31654) 
_http_log_request /usr/lib/python2.7/site-packages/keystoneauth1/session.py:248
2016-02-19 15:38:04.524 DEBUG keystoneauth.session [-] RESP: [200] 
Content-Length: 4227 X-Subject-Token: 
{SHA1}2548654be6a35b05a064f0bd15e4da50119ebeda Vary: X-Auth-Token Keep-Alive: 
timeout=5, max=100 Server: Apache/2.4.6 (CentOS) OpenSSL/1.0.1e-fips 
mod_wsgi/3.4 Python/2.7.5 Connection: Keep-Alive Date: Fri, 19 Feb 2016 
15:38:04 GMT Content-Type: application/json x-openstack-request-id: 
req-4dd14b09-bc0e-4dfb-ae6c-76d77cbc0bd4
RESP BODY: {"token": {"methods": ["password", "token"], "roles": [{"id": 
"16109acaf0c640e4b96abeb2eca388e1", "name": "admin"}], "expires_at": 
"2016-02-19T16:38:04.00Z", "project": {"domain": {"id": "default", "name": 
"Default"}, "id": "0b130fa37b9a44a696b64559e713e032", "name": "admin"}, 
"catalog": "", "user": {"domain": {"id": "default", "name": 
"Default"}, "id": "4e6697fd84604775b097bb53bd290367", "name": "admin"}, 
"audit_ids": ["kEE5cM-hRHyH50QppYnuOw"], "issued_at": 
"2016-02-19T15:38:04.499020"}}
 from (pid=31654) _http_log_response 
/usr/lib/python2.7/site-packages/keystoneauth1/session.py:277
2016-02-19 15:38:04.525 INFO nova.osapi_compute.wsgi.server 
[req-82377dd1-15fb-439c-959b-c1afec33091b admin admin] 10.0.0.150 "GET 
/v2.1/0b130fa37b9a44a696b64559e713e032 HTTP/1.1" status: 404 len: 264 time: 
0.0177109
2016-02-19 15:38:04.529 DEBUG nova.api.openstack.wsgi 
[req-24e38a23-d0ba-46aa-84ba-f6dfc27961e9 admin admin] Calling method '>' from (pid=31654) _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:699
2016-02-19 15:38:04.530 INFO nova.osapi_compute.wsgi.server 
[req-24e38a23-d0ba-46aa-84ba-f6dfc27961e9 admin admin] 10.0.0.150 "GET /v2.1/ 
HTTP/1.1" status: 200 len: 652 time: 0.0027330
2016-02-19 15:38:04.600 DEBUG nova.api.openstack.wsgi 
[req-78138575-cdcc-408e-a232-dc717bfe96d9 admin admin] Calling method '>' from (pid=31654) _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:699
2016-02-19 15:38:04.601 ERROR nova.api.openstack.extensions 
[req-78138575-cdcc-408e-a232-dc717bfe96d9 admin admin] Unexpected exception in 
API method
2016-02-19 15:38:04.601 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
2016-02-19 15:38:04.601 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
2016-02-19 15:38:04.601 TRACE nova.api.openstack.extensions return f(*args, 
**kwargs)
2016-02-19 15:38:04.601 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/baremetal_nodes.py", line 105, in 
index
2016-02-19 15:38:04.601 TRACE nova.api.openstack.extensions icli = 
_get_ironic_client()
2016-02-19 15:38:04.601 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/baremetal_nodes.py", line 76, in 
_get_ironic_client
2016-02-19 15:38:04.601 TRACE nova.api.openstack.extensions icli = 
ironic_client.get_client(CONF.ironic.api_version, **kwargs)
2016-02-19 15:38:04.601 TRACE nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/ironicclient/client.py", line 106, in 
get_client
2016-02-19 15:38:04.601 TRACE nova.api.openstack.extensions raise 
exc.AmbiguousAuthSystem(e)
2016-02-19 15:38:04.601 TRACE nova.api.openstack.extensions 
AmbiguousAuthSystem: Must provide Keystone credentials or user-defined endpoint 
and token
2016-02-19 15:38:04.601 TRACE nova.api.openstack.extensions
2016-02-19 15:38:04.601 INFO nova.api.openstack.wsgi 
[req-78138575-cdcc-408e-a232-dc717bfe96d9 admin admin] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.

2016-02-19 15:38:04.601 DEBUG nova.api.openstack.wsgi 
[req-78138575-cdcc-408e-a232-dc717bfe96d9 admin admin] Returning 500 to user: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
 from (pid=31654) __call__ 
/opt/stack/nova/nova/api/openstack/wsgi.py:1070
2016-02-19 15:38:04.602 INFO nova.osapi_compute.wsgi.server 
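
The AmbiguousAuthSystem trace above means nova's [ironic] credentials
were never configured; a sketch of the missing nova.conf section (option
names as used by this era of nova/ironicclient; every value below is a
placeholder):

    [ironic]
    api_endpoint = http://10.0.0.150:6385/v1
    admin_username = ironic
    admin_password = secret
    admin_tenant_name = service
    admin_url = http://10.0.0.150:35357/v2.0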

[Yahoo-eng-team] [Bug 1506076] Re: Allow connection tracking to be disabled per-port

2016-02-22 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506076

Title:
  Allow connection tracking to be disabled per-port

Status in neutron:
  Expired

Bug description:
  This RFE is being raised in the context of this use case
  https://review.openstack.org/#/c/176301/ from the TelcoWG.

  OpenStack implements levels of per-VM security protection (security
  groups, anti-spoofing rules).  If you want to deploy a trusted VM
  which itself is providing network security functions, as with the
  above use case, then it is often necessary to disable some of the
  native OpenStack protection so as not to interfere with the protection
  offered by the VM or use excessive host resources.

  Neutron already allows you to disable security groups on a per-port
  basis.  However, the Linux kernel will still perform connection
  tracking on those ports.  With default Linux config, VMs will be
  severely scale limited without specific host configuration of
  connection tracking limits - for example, a Session Border Controller
  VM may be capable of handling millions of concurrent TCP connections,
  but a default host won't support anything like that. This bug is
  therefore an RFE requesting that disabling the security group function
  for a port also disable kernel connection tracking for the IP
  addresses associated with that port.
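
  For illustration, the host-side equivalents of what is being requested
  (a sketch; the address is a placeholder):

      # skip connection tracking for the trusted port's traffic
      sudo iptables -t raw -I PREROUTING -d 192.0.2.10 -j CT --notrack
      sudo iptables -t raw -I OUTPUT -s 192.0.2.10 -j CT --notrack

      # or raise the tracking table limit, the workaround operators use today
      sudo sysctl -w net.netfilter.nf_conntrack_max=1048576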

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506076/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512207] Re: Fix usage of assertions

2016-02-22 Thread Davanum Srinivas (DIMS)
** No longer affects: oslo.utils

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1512207

Title:
  Fix usage of assertions

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Blazar:
  In Progress
Status in Cinder:
  Invalid
Status in congress:
  Fix Released
Status in Cue:
  Fix Released
Status in Glance:
  Won't Fix
Status in Group Based Policy:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in Ironic Inspector:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in kuryr:
  Fix Released
Status in Magnum:
  In Progress
Status in Manila:
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Health:
  In Progress
Status in os-brick:
  Fix Released
Status in os-net-config:
  In Progress
Status in os-testr:
  In Progress
Status in oslo.cache:
  Fix Released
Status in oslo.messaging:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-barbicanclient:
  In Progress
Status in python-ceilometerclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in OpenStack SDK:
  Fix Released
Status in Rally:
  Fix Released
Status in refstack:
  In Progress
Status in requests-mock:
  In Progress
Status in Sahara:
  Fix Released
Status in shaker:
  Fix Released
Status in Solum:
  Fix Released
Status in Stackalytics:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in tempest:
  In Progress
Status in Trove:
  Fix Released
Status in Vitrage:
  Fix Released
Status in watcher:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  Manila should use the specific assertions:

    self.assertTrue/False(observed)

  instead of the generic assertion:

    self.assertEqual(True/False, observed)
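
  A minimal illustration of the two spellings:

      # generic -- on failure it only reports True != False
      self.assertEqual(True, observed)

      # specific and preferred
      self.assertTrue(observed)
      self.assertFalse(other_observed)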

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1512207/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1547197] Re: "Additional properties are not allowed" message not getting translated

2016-02-22 Thread Sean Dague
Given Matt's description of this being a direct message from JSON
Schema, it's something that's unlikely to be addressed in the Nova
source.

** Changed in: nova
   Status: Incomplete => Won't Fix

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1547197

Title:
  "Additional properties are not allowed" message not getting translated

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  During a Nova v2.1 request, the message "Additional properties are not
  allowed" is not being translated.  Other parts of the API request are
  being translated, but the error message seems to be hard-coded in
  English.

  Example:

  URL:
  
https://9.47.82.183:8774/v2.1/dbcb06068c6e4a3fb59326a0bce653f0/os-hosts/ip9_114_181_60

  Body:
  {
  "registration": {
  "host_display_name": "MyKVMHost_updated1",
  "access_ip": "912.123.233.44",
  "user_id": "root",
  "password": "Passw0rd"
  }
  }

  Response:
  400

  {"badRequest": {"message": "Additional properties are not allowed
  (u'registration' was unexpected)", "code": 400}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1547197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1545814] Re: Tablecontroller should use ctrl instead of scope

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280365
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=b01ceda1edbb536867db0d55ad95e3666a1b5b4b
Submitter: Jenkins
Branch:master

commit b01ceda1edbb536867db0d55ad95e3666a1b5b4b
Author: Thai Tran 
Date:   Thu Feb 18 11:36:15 2016 -0800

Tablecontroller should use ctrl instead of scope

TableController currently uses $scope for selected and numSelected.
It should use ctrl as suggested by JP's guide.

Change-Id: Ie0bdd85343206008ba80fb6a696c78bfc7fe09bb
Closes-Bug: #1545814


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1545814

Title:
  Tablecontroller should use ctrl instead of scope

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  horizon.framework.widgets.table.controller:TableController currently
  uses $scope for selected and numSelected. It should use ctrl as
  suggested by JP's guide.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1545814/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 967832] Re: Resources owned by a project/tenant are not cleaned up after that project is deleted from keystone

2016-02-22 Thread Sean Dague
The Tokyo Summit solution here was that this should be done via an osc
plugin. There are really dramatic issues with automatically deleting
resources when a keystone project is deleted. Many sites need an archive
process. Nova itself soft-deletes many resources, and even has the
ability to set an undo time on some of them.

This shouldn't be an automatic process in the cloud; it should be
deliberate. Just like you wouldn't delete all the files on your system
owned by a user just because you removed that user from /etc/passwd.
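
A deliberate cleanup pass along those lines is easy to script; a minimal
sketch with python-novaclient (client and tenant_id are hypothetical
names for an admin client and the deleted project's ID):

    # List servers across all tenants, filtered to the orphaned project,
    # then delete them explicitly -- an operator decision, not automatic.
    servers = client.servers.list(search_opts={'all_tenants': 1,
                                               'tenant_id': tenant_id})
    for server in servers:
        server.delete()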

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/967832

Title:
  Resources owned by a project/tenant are not cleaned up after that
  project is deleted from keystone

Status in Glance:
  Confirmed
Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  If you have running instances in a tenant, then remove all the users,
  and finally delete the tenant, the instances are still running. This
  causes serious trouble, since nobody has access to delete them. It also
  affects the "Instances" page in Horizon, which will break if this
  scenario occurs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/967832/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398629] Re: Fix buggy tests that use REQUIRES_LOCKING=True

2016-02-22 Thread Sean Dague
A bug for this isn't really helping get this work done. This is just one
of those non-bug test cleanup efforts.

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398629

Title:
  Fix buggy tests that use REQUIRES_LOCKING=True

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  from https://github.com/openstack/nova/blob/master/nova/test.py#L311

  # NOTE(sdague): because of the way we were using the lock
  # wrapper we ended up with a lot of tests that started
  # relying on global external locking being set up for them. We
  # consider all of these to be *bugs*. Tests should not require
  # global external locking, or if they do, they should
  # explicitly set it up themselves.
  #
  # The following REQUIRES_LOCKING class parameter is provided
  # as a bridge to get us there. No new tests should be added
  # that require it, and existing classes and tests should be
  # fixed to not need it.

  We need to fix the tests that use REQUIRES_LOCKING = True.
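
  A sketch of the two patterns the note contrasts (the fixture import
  path is from oslo.concurrency and should be treated as an assumption):

      # bridge, discouraged: keep relying on global external locking
      class MyLegacyTest(test.TestCase):
          REQUIRES_LOCKING = True

      # preferred: the test sets up exactly the locking it needs
      from oslo_concurrency.fixture import lockutils as lock_fixture

      class MyFixedTest(test.TestCase):
          def setUp(self):
              super(MyFixedTest, self).setUp()
              self.useFixture(lock_fixture.ExternalLockFixture())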

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1398629/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456196] Re: Delete flavor should raise error, if it is inuse (any existing vms booted with this flavor)

2016-02-22 Thread Sean Dague
** Changed in: nova
   Status: In Progress => Confirmed

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456196

Title:
  Delete flavor should raise error, if it is inuse (any existing vms
  booted with this flavor)

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Version : Juno Devstack

  Steps to reproduce.

  Step 1: Create an instance with the m1.tiny flavor.

  stack@onecloud-Standard-PC-i440FX-PIIX-1996:~/nova/nova/compute$ nova boot --image cirros-0.3.4-x86_64-uec --flavor m1.tiny vm1

  +--------------------------------------+----------------------------------------------------------------+
  | Property                             | Value                                                          |
  +--------------------------------------+----------------------------------------------------------------+
  | OS-DCF:diskConfig                    | MANUAL                                                         |
  | OS-EXT-AZ:availability_zone          | nova                                                           |
  | OS-EXT-SRV-ATTR:host                 | -                                                              |
  | OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                              |
  | OS-EXT-SRV-ATTR:instance_name        | instance-0006                                                  |
  | OS-EXT-STS:power_state               | 0                                                              |
  | OS-EXT-STS:task_state                | scheduling                                                     |
  | OS-EXT-STS:vm_state                  | building                                                       |
  | OS-SRV-USG:launched_at               | -                                                              |
  | OS-SRV-USG:terminated_at             | -                                                              |
  | accessIPv4                           |                                                                |
  | accessIPv6                           |                                                                |
  | adminPass                            | Uaypcj6qKzbr                                                   |
  | config_drive                         |                                                                |
  | created                              | 2015-05-18T12:38:25Z                                           |
  | flavor                               | m1.tiny (6)                                                    |
  | hostId                               |                                                                |
  | id                                   | 7b4fdada-6900-4836-9de7-3bda0f13dabf                           |
  | image                                | cirros-0.3.4-x86_64-uec (a49af497-e336-4c5a-8508-6dabb70fe261) |
  | key_name                             | -                                                              |
  | metadata                             | {}                                                             |
  | name                                 | vm1                                                            |
  | os-extended-volumes:volumes_attached | []                                                             |
  | progress                             | 0                                                              |
  | security_groups                      | default                                                        |
  | status                               | BUILD                                                          |
  | tenant_id                            | d5a7933dfa98430abf7fcc37ff2661b1                               |
  | updated                              | 2015-05-18T12:38:26Z                                           |
  | user_id                              | a20aaf87a4344985ae17e378065858ed                               |
  +--------------------------------------+----------------------------------------------------------------+

  Before deleting the flavor, please note the above output.

  Step 2: Once the instance is active, delete the m1.tiny flavor.

  stack@onecloud-Standard-PC-i440FX-PIIX-1996:~/nova/nova/compute$ nova flavor-delete m1.tiny
  +----+---------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +----+---------+-----------+------+-----------+------+-------+-------------+-----------+
  | 6  | m1.tiny | 512       | 1    | 0         |      | 1     | 1.0         | True      |
  +----+---------+-----------+------+-----------+------+-------+-------------+-----------+

  Step 3: Nova show 

[Yahoo-eng-team] [Bug 1548198] [NEW] Misuse of assertTrue in console and virt unittests

2016-02-22 Thread Takashi NATSUME
Public bug reported:

assertEqual should be used instead of assertTrue in the following
unit tests:

test_get_format_fs method of class VirtDiskVFSGuestFSTest in
nova/tests/unit/virt/disk/vfs/test_guestfs.py
test_acquire_port method in nova/tests/unit/console/test_serial.py
test_get_os method of class LibvirtOsInfoTest in
nova/tests/unit/virt/test_osinfo.py
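
The usual pitfall behind these fixes: assertTrue()'s second positional
argument is the failure message, so a two-value "comparison" always
passes. A hypothetical illustration:

    # always passes -- 'vfat' is truthy; the second value is just the message
    self.assertTrue('vfat', vfs.get_image_format())

    # the intended check
    self.assertEqual('vfat', vfs.get_image_format())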

** Affects: nova
 Importance: Undecided
 Assignee: Takashi NATSUME (natsume-takashi)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1548198

Title:
  Misuse of assertTrue in console and virt unittests

Status in OpenStack Compute (nova):
  New

Bug description:
  assertEqual should be used instead of assertTrue in the following
  unit tests:

  test_get_format_fs method of class VirtDiskVFSGuestFSTest in
  nova/tests/unit/virt/disk/vfs/test_guestfs.py
  test_acquire_port method in nova/tests/unit/console/test_serial.py
  test_get_os method of class LibvirtOsInfoTest in
  nova/tests/unit/virt/test_osinfo.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1548198/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311778] Re: Unit tests fail with MessagingTimeout errors

2016-02-22 Thread Sean Dague
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1311778

Title:
  Unit tests fail with MessagingTimeout errors

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  There is an issue that is causing unit tests to fail with the
  following error:

  MessagingTimeout: No reply on topic conductor
  MessagingTimeout: No reply on topic scheduler

  2014-04-23 13:45:52.017 | Traceback (most recent call last):
  2014-04-23 13:45:52.017 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 133, in _dispatch_and_reply
  2014-04-23 13:45:52.017 | incoming.message))
  2014-04-23 13:45:52.017 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 176, in _dispatch
  2014-04-23 13:45:52.017 | return self._do_dispatch(endpoint, method, 
ctxt, args)
  2014-04-23 13:45:52.017 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 122, in _do_dispatch
  2014-04-23 13:45:52.017 | result = getattr(endpoint, method)(ctxt, 
**new_args)
  2014-04-23 13:45:52.018 |   File "nova/conductor/manager.py", line 798, in 
build_instances
  2014-04-23 13:45:52.018 | legacy_bdm_in_spec=legacy_bdm)
  2014-04-23 13:51:50.628 |   File "nlibvir:  error : internal error could not 
initialize domain event timer
  2014-04-23 13:54:57.953 | ova/scheduler/rpcapi.py", line 120, in run_instance
  2014-04-23 13:54:57.953 | cctxt.cast(ctxt, 'run_instance', **msg_kwargs)
  2014-04-23 13:54:57.953 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/client.py",
 line 150, in call
  2014-04-23 13:54:57.953 | wait_for_reply=True, timeout=timeout)
  2014-04-23 13:54:57.953 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/transport.py",
 line 90, in _send
  2014-04-23 13:54:57.953 | timeout=timeout)
  2014-04-23 13:54:57.954 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/_drivers/impl_fake.py",
 line 166, in send
  2014-04-23 13:54:57.954 | return self._send(target, ctxt, message, 
wait_for_reply, timeout)
  2014-04-23 13:54:57.954 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/_drivers/impl_fake.py",
 line 161, in _send
  2014-04-23 13:54:57.954 | 'No reply on topic %s' % target.topic)
  2014-04-23 13:54:57.954 | MessagingTimeout: No reply on topic scheduler

  

  2014-04-23 13:45:52.008 | Traceback (most recent call last):
  2014-04-23 13:45:52.008 |   File "nova/api/openstack/__init__.py", line 125, 
in __call__
  2014-04-23 13:45:52.008 | return req.get_response(self.application)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/request.py",
 line 1320, in send
  2014-04-23 13:45:52.009 | application, catch_exc_info=False)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/request.py",
 line 1284, in call_application
  2014-04-23 13:45:52.009 | app_iter = application(self.environ, 
start_response)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.009 | return resp(environ, start_response)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.010 | return resp(environ, start_response)
  2014-04-23 13:45:52.010 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.010 | return resp(environ, start_response)
  2014-04-23 13:45:52.010 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.010 | return resp(environ, start_response)
  2014-04-23 13:45:52.010 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/routes/middleware.py",
 line 131, in __call__
  2014-04-23 13:45:52.010 | response = self.app(environ, start_response)
  2014-04-23 13:45:52.011 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 

[Yahoo-eng-team] [Bug 1353939] Re: Rescue fails with 'Failed to terminate process: Device or resource busy' in the n-cpu log

2016-02-22 Thread Sean Dague
** Changed in: nova/juno
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1353939

Title:
  Rescue fails with 'Failed to terminate process: Device or resource
  busy' in the n-cpu log

Status in Ubuntu Cloud Archive:
  Invalid
Status in Ubuntu Cloud Archive juno series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Won't Fix
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in nova package in Ubuntu:
  Invalid

Bug description:
  [Impact]

   * Users may sometimes fail to shut down an instance if the associated qemu
     process is in uninterruptible sleep (typically due to IO).

  [Test Case]

   * 1. Create some IO load in a VM.
     2. Look at the associated qemu process; make sure it shows STAT D in
        the ps output.
     3. Shut down the instance.
     4. With the patch in place, nova retries calling libvirt to shut down
        the instance three times, waiting for the signal to be delivered
        to the qemu process (see the sketch below).
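
  A minimal sketch of that retry behaviour (illustrative only; the error
  string, retry count, and delay are assumptions, and this is not the
  merged patch):

    import time

    def destroy_with_retries(domain, attempts=3, delay=1):
        """Ask libvirt to destroy a domain, retrying while qemu is busy."""
        for attempt in range(1, attempts + 1):
            try:
                domain.destroy()  # delivers SIGKILL to the qemu process
                return
            except Exception as exc:  # libvirt.libvirtError in practice
                if 'resource busy' not in str(exc) or attempt == attempts:
                    raise
                time.sleep(delay)  # give the D-state process time to wake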

  [Regression Potential]

   * None


  message: "Failed to terminate process" AND
  message:'InstanceNotRescuable' AND message: 'Exception during message
  handling' AND tags:"screen-n-cpu.txt"

  The above logstash query reports back only the failed jobs; the 'Failed to
  terminate process' message also appears alongside other failed rescue tests,
  but tempest does not always report them as an error at the end.

  message: "Failed to terminate process" AND tags:"screen-n-cpu.txt"

  Usual console log:
  Details: (ServerRescueTestJSON:test_rescue_unrescue_instance) Server 
0573094d-53da-40a5-948a-747d181462f5 failed to reach RESCUE status and task 
state "None" within the required time (196 s). Current status: SHUTOFF. Current 
task state: None.

  http://logs.openstack.org/82/107982/2/gate/gate-tempest-dsvm-postgres-
  full/90726cb/console.html#_2014-08-07_03_50_26_520

  Usual n-cpu exception:
  
http://logs.openstack.org/82/107982/2/gate/gate-tempest-dsvm-postgres-full/90726cb/logs/screen-n-cpu.txt.gz#_2014-08-07_03_32_02_855

  2014-08-07 03:32:02.855 ERROR oslo.messaging.rpc.dispatcher 
[req-39ce7a3d-5ceb-41f5-8f9f-face7e608bd1 ServerRescueTestJSON-2035684545 
ServerRescueTestJSON-1017508309] Exception during message handling: Instance 
0573094d-53da-40a5-948a-747d181462f5 cannot be rescued: Driver Error: Failed to 
terminate process 26425 with SIGKILL: Device or resource busy
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 408, in decorated_function
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 88, in wrapped
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 71, in wrapped
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 292, in decorated_function
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher pass
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 

[Yahoo-eng-team] [Bug 1306727] Re: versions controller requests with a body log ERRORs

2016-02-22 Thread Sean Dague
** Changed in: nova/icehouse
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1306727

Title:
  versions controller requests with a body log ERRORs

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  Using Nova trunk (Juno). I'm seeing the following nova-api.log errors
  when unauthenticated /versions controller POST requests are made with
  a request body:

  -

  Apr 11 07:04:06 overcloud-controller0-n2g3h54d6w6u nova-api[27022]: 
2014-04-11 07:04:06.235 27044 ERROR nova.api.openstack.wsgi [-] Exception 
handling resource: index() got an unexpected keyword argument 'body'
  Apr 11 07:04:06 overcloud-controller0-n2g3h54d6w6u nova-api[27022]: 
2014-04-11 07:04:06.235 27044 TRACE nova.api.openstack.wsgi Traceback (most 
recent call last):
  Apr 11 07:04:06 overcloud-controller0-n2g3h54d6w6u nova-api[27022]: 
2014-04-11 07:04:06.235 27044 TRACE nova.api.openstack.wsgi   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", 
line 983, in _process_stack
  Apr 11 07:04:06 overcloud-controller0-n2g3h54d6w6u nova-api[27022]: 
2014-04-11 07:04:06.235 27044 TRACE nova.api.openstack.wsgi action_result = 
self.dispatch(meth, request, action_args)
  Apr 11 07:04:06 overcloud-controller0-n2g3h54d6w6u nova-api[27022]: 
2014-04-11 07:04:06.235 27044 TRACE nova.api.openstack.wsgi   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/api/openstack/wsgi.py", 
line 1070, in dispatch
  Apr 11 07:04:06 overcloud-controller0-n2g3h54d6w6u nova-api[27022]: 
2014-04-11 07:04:06.235 27044 TRACE nova.api.openstack.wsgi return 
method(req=request, **action_args)
  Apr 11 07:04:06 overcloud-controller0-n2g3h54d6w6u nova-api[27022]: 
2014-04-11 07:04:06.235 27044 TRACE nova.api.openstack.wsgi TypeError: index() 
got an unexpected keyword argument 'body'

  -

  Both the index() and multi() actions in the versions controller are
  susceptible to this behavior. Ideally we wouldn't be logging stack
  traces when this happens.
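
  A minimal sketch of one way to tolerate the stray body (hypothetical;
  this is not the fix that was merged, and VERSIONS stands in for the real
  version data):

    VERSIONS = {'v2.0': {'status': 'CURRENT'}}

    class Versions(object):
        def index(self, req, body=None):
            # Accept and ignore a request body instead of letting the
            # dispatcher's **action_args raise TypeError.
            return {'versions': VERSIONS}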

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1306727/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2016-02-22 Thread Davanum Srinivas (DIMS)
** No longer affects: oslo.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Gnocchi:
  Invalid
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.cache:
  Invalid
Status in oslo.concurrency:
  Invalid
Status in oslo.service:
  Fix Committed
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  Fix Released
Status in python-cueclient:
  Fix Released
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  Fix Committed
Status in python-neutronclient:
  Fix Released
Status in Python client library for Sahara:
  Fix Committed
Status in python-solumclient:
  Fix Released
Status in python-swiftclient:
  Fix Released
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Committed
Status in Solum:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in Trove:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  Because Python writes .pyc files during tox runs, certain changes in the
  tree, such as deleting files or switching branches, can cause spurious
  errors. This can be suppressed by setting PYTHONDONTWRITEBYTECODE=1 in
  the tox.ini.
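
  A minimal sketch of that setting, assuming a conventional tox layout
  (fold it into the project's existing testenv section):

    [testenv]
    setenv =
        PYTHONDONTWRITEBYTECODE=1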

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544383] Re: Add the ability to load a set of service plugins on startup

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/282586
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=76c446bb5d1173798f13f5859ec52d23cb8dbedd
Submitter: Jenkins
Branch:master

commit 76c446bb5d1173798f13f5859ec52d23cb8dbedd
Author: Armando Migliaccio 
Date:   Fri Feb 19 17:04:00 2016 -0800

Document the ability to load service plugins at startup

Change-Id: I1368f3505b68ea20e2585e23d10d90fcd2bac1f6
Closes-bug: #1544383


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544383

Title:
  Add the ability to load a set of service plugins on startup

Status in neutron:
  Fix Released
Status in openstack-manuals:
  Confirmed

Bug description:
  https://review.openstack.org/273439
  Dear bug triager. This bug was created because a commit was marked with
  DOCIMPACT.
  Your project "openstack/neutron" is set up so that documentation bugs are
  reported directly against it. If this needs changing, the docimpact-group
  option needs to be added for the project. You can ask the OpenStack infra
  team (#openstack-infra on freenode) for help if you need to.

  commit aadf2f30f84dff3d85f380a7ff4e16dbbb0c6bb0
  Author: armando-migliaccio 
  Date:   Thu Jan 28 01:39:00 2016 -0800

  Add the ability to load a set of service plugins on startup
  
  Service plugins are a great way of adding functionality in a
  cohesive way. Some plugins (e.g. network ip availability or
  auto_allocate) extend the capabilities of the Neutron server
  by being completely orthogonal to the core plugin, and yet may
  be considered an integral part of functionality available in
  any Neutron deployment. For this reason, it makes sense to
  include them seamlessly in the service plugin loading process.
  
  This patch, in particular, introduces the 'auto_allocate' service
  plugin for default loading, as we'd want this feature to be enabled
  for Nova to use irrespective of the chosen underlying core plugin.
  
  The feature requires subnetpools, external_net and router, while
  the first is part of the core, the others can be plugin specific
  and they must be explicitly advertised. That said, they all are
  features that any deployment can hardly live without.
  
  DocImpact: The "get-me-a-network" feature simplifies the process
  for launching an instance with basic network connectivity (via an
  externally connected private tenant network).
  
  Once leveraged by Nova, a tenant/admin is no longer required to
  provision networking resources ahead of boot process in order to
  successfully launch an instance.
  
  Implements: blueprint get-me-a-network
  
  Change-Id: Ia35e8a946bf0ac0bb085cde46b675d17b0bb2f51
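
  As a rough illustration of the operator-facing configuration (plugin
  aliases vary by release, and default-loaded plugins such as
  auto_allocate no longer need to be listed):

    [DEFAULT]
    # Explicitly requested service plugins in neutron.conf; plugins
    # loaded by default on startup do not need to appear here.
    service_plugins = router,network_ip_availability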

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544383/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369518] Re: Server Group Anti/Affinity functionality doesn't work with cells

2016-02-22 Thread Sean Dague
Cells v1 is in freeze. Only regressions will be addressed.

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369518

Title:
  Server Group Anti/Affinity functionality doesn't work with cells

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Server groups do not work with cells.
  Tested in Icehouse.

  When created through the API, the server group exists only in the top
  cell and is not propagated to the child cells. Booting a VM then fails
  because the schedulers in the child cells are not aware of the server
  group.

  Creating the entries "manually" in the child-cell databases keeps the
  instance scheduling from failing, but the anti/affinity policy is still
  not enforced correctly: server group members are only updated in the
  TOP cell, and schedulers in the child cells are not aware of the
  members of the group (the table is empty in the children), so
  anti/affinity is not respected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369518/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1539960] Re: Style: Material Design: Brand SVG Should Inherit color from theme

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/274371
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=4332961bd587820b1cefccdcae87ad9ccf49079b
Submitter: Jenkins
Branch:master

commit 4332961bd587820b1cefccdcae87ad9ccf49079b
Author: Bryan Jen 
Date:   Sat Jan 30 11:47:59 2016 -0700

Style: Material: Fixes colors for navbar and menu

Elements on the navbar (Hamburger menu, Brand, and Responsive
Overflow Menu) need to correctly inherit colors from the theme.

Also, Bootswatch's 'paper' sets the color of the
.dropdown-header to be a light gray regardless of whether its an
inverse navbar. This is not ideal. It should inherit the same color.

Change-Id: I29ebfe82209d16a785b7171cc1662768c7c3191c
Co-Authored-By: Diana Whitten 
Closes-bug: #1539951
Closes-bug: #1539952
Closes-bug: #1539960
Closes-bug: #1540745


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1539960

Title:
  Style: Material Design: Brand SVG Should Inherit color from theme

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The Brand SVG in the material theme should inherit its color from the
  theme itself (navbar-link-color or something similar), so that we can
  easily update it in the future.

  See: https://i.imgur.com/VA1PIOu.png
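
  A hypothetical sketch of the intended styling (the variable name is a
  Bootstrap-style guess, not necessarily the one the theme defines):

    // Let the brand SVG inherit the navbar link color from the theme
    // instead of carrying a hard-coded fill.
    .navbar-brand svg path {
      fill: $navbar-default-link-color;
    }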

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1539960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548017] [NEW] Unable to get bootable volume list at launch instance (launch instance NG)

2016-02-22 Thread Vincent Untz
Public bug reported:

This is really the same bug as
https://bugs.launchpad.net/horizon/+bug/1457028 except that it's for the
NG launch instance dialog.

** Affects: horizon
 Importance: Undecided
 Assignee: Vincent Untz (vuntz)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1548017

Title:
   Unable to get bootable volume list at launch instance (launch
  instance NG)

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  This is really the same bug as
  https://bugs.launchpad.net/horizon/+bug/1457028 except that it's for
  the NG launch instance dialog.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1548017/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1547544] Re: heat: MessagingTimeout: Timed out waiting for a reply to message ID

2016-02-22 Thread Sean Dague
I realistically expect that you have just overloaded the system so these
requests are taking too long. dstat info during the run would be useful
to figure that out.

** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1547544

Title:
  heat: MessagingTimeout: Timed out waiting for a reply to message ID

Status in OpenStack Compute (nova):
  Incomplete
Status in oslo.messaging:
  New

Bug description:
  Setup:

  Single controller [48 GB RAM, 16 vCPU, 120 GB disk]
  3 network nodes
  100 ESX hypervisors distributed across 10 nova-compute nodes

  Test:

  1. Create a /16 network.
  2. Write a heat template that launches 100 instances on the network
     created in step 1 (a minimal example is sketched after this list).
  3. Create 10 stacks back to back, so that 1000 instances are requested
     without waiting for the previous stack to complete.
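
  A minimal HOT template of the sort described might look like this
  (hypothetical, not the attached template; names and counts are
  illustrative):

    heat_template_version: 2015-04-30
    resources:
      servers:
        type: OS::Heat::ResourceGroup
        properties:
          count: 100
          resource_def:
            type: OS::Nova::Server
            properties:
              image: cirros
              flavor: m1.tiny
              networks:
                - network: test-net-16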

  Observation:

  Stack creation fails while nova runs its periodic tasks, at various
  points such as _heal_instance_info_cache, _sync_scheduler_instance_info,
  and _update_available_resource.

  A sample heat template, the heat logs, and the nova-compute log from
  one of the hosts are attached.

  
  Logs:

  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  2016-02-19 04:21:54.691 TRACE nova.compute.manager return f(*args, 
**kwargs)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 553, in 
_update_available_resource
  2016-02-19 04:21:54.691 TRACE nova.compute.manager context, self.host, 
self.nodename)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
174, in wrapper
  2016-02-19 04:21:54.691 TRACE nova.compute.manager args, kwargs)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/conductor/rpcapi.py", line 240, in 
object_class_action_versions
  2016-02-19 04:21:54.691 TRACE nova.compute.manager args=args, 
kwargs=kwargs)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
158, in call
  2016-02-19 04:21:54.691 TRACE nova.compute.manager retry=self.retry)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send
  2016-02-19 04:21:54.691 TRACE nova.compute.manager timeout=timeout, 
retry=retry)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 465, in send
  2016-02-19 04:21:54.691 TRACE nova.compute.manager retry=retry)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 454, in _send
  2016-02-19 04:21:54.691 TRACE nova.compute.manager result = 
self._waiter.wait(msg_id, timeout)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 337, in wait
  2016-02-19 04:21:54.691 TRACE nova.compute.manager message = 
self.waiters.get(msg_id, timeout=timeout)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 239, in get
  2016-02-19 04:21:54.691 TRACE nova.compute.manager 'to message ID %s' % 
msg_id)
  2016-02-19 04:21:54.691 TRACE nova.compute.manager MessagingTimeout: Timed 
out waiting for a reply to message ID a87a7f358a0948efa3ab5beb0c8f45e7
  --

  
  stack@esx-compute-9:/opt/stack/nova$ git log -1
  commit d51c5670d8d26e989d92eb29658eed8113034c0f
  Merge: 4fade90 30d5d80
  Author: Jenkins 
  Date:   Thu Feb 18 17:56:32 2016 +

  Merge "reset task_state after select_destinations failed."
  stack@esx-compute-9:/opt/stack/nova$

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1547544/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525250] Re: Failure when federated user name contains non ascii characters

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/279908
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=e913001cbedb4dd8748023ede31115a032de83f8
Submitter: Jenkins
Branch:master

commit e913001cbedb4dd8748023ede31115a032de83f8
Author: Steve Martinelli 
Date:   Sat Feb 13 20:42:30 2016 -0500

handle unicode names for federated users

the previous logic that handled getting the assertions from
the environment did not account for utf8 characters

Co-Authored-By: David Stanek 
Closes-Bug: 1525250

Change-Id: I90f4885161a72758986a652e845b4017f9cdcfb7


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1525250

Title:
  Failure when federated user name contains non ascii characters

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When logging in with openid-connect, I get

   '{"error": {"message": "An unexpected error prevented the server from
  fulfilling your request: 'ascii' codec can't decode byte 0xc3 in
  position 5: ordinal not in range(128) (Disable debug mode to suppress
  these details.)", "code": 500, "title": "Internal Server Error"}}'

  My name has an 'å'. I suspect there is a connection.

  Coincidentally(?), if I do the following in a Python shell:

  >>> unicode('Jon Kåre Hellan')

  I get 'UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in
  position 5: ordinal not in range(128)'

  This is on liberty, using federation in contrib. On master, federation
  has been moved up from contrib, but I couldn't see any code changes
  that would help.
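
  A minimal Python 2 reproduction of the failure mode (it mirrors the
  mapping code only loosely):

    # -*- coding: utf-8 -*-
    name = 'Jon K\xc3\xa5re Hellan'  # UTF-8 bytes, as read from the environ
    try:
        u'{0}'.format(name)          # implicit ascii decode raises here
    except UnicodeDecodeError as exc:
        print(exc)
    # Decoding the environ value to unicode first avoids the error:
    print(u'{0}'.format(name.decode('utf-8')))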

  Stack trace:

  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 248, 
in __call__
  result = method(context, **params)
File 
"/usr/lib/python2.7/site-packages/keystone/contrib/federation/controllers.py", 
line 315, in federated_sso_auth
  protocol_id)
File 
"/usr/lib/python2.7/site-packages/keystone/contrib/federation/controllers.py", 
line 297, in federated_authentication
  return self.authenticate_for_token(context, auth=auth)
File "/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 
385, in authenticate_for_token
  self.authenticate(context, auth_info, auth_context)
File "/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 
510, in authenticate
  auth_context)
File "/usr/lib/python2.7/site-packages/keystone/auth/plugins/mapped.py", 
line 69, in authenticate
  self.identity_api)
File "/usr/lib/python2.7/site-packages/keystone/auth/plugins/mapped.py", 
line 144, in handle_unscoped_token
  federation_api, identity_api)
File "/usr/lib/python2.7/site-packages/keystone/auth/plugins/mapped.py", 
line 188, in apply_mapping_filter
  identity_provider, protocol, assertion)
File 
"/usr/lib/python2.7/site-packages/keystone/contrib/federation/core.py", line 
90, in evaluate
  mapped_properties = rule_processor.process(assertion_data)
File 
"/usr/lib/python2.7/site-packages/keystone/contrib/federation/utils.py", line 
470, in process
  new_local = self._update_local_mapping(local, direct_maps)
File 
"/usr/lib/python2.7/site-packages/keystone/contrib/federation/utils.py", line 
611, in _update_local_mapping
  new_value = self._update_local_mapping(v, direct_maps)
File 
"/usr/lib/python2.7/site-packages/keystone/contrib/federation/utils.py", line 
613, in _update_local_mapping
  new_value = v.format(*direct_maps)
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 5: 
ordinal not in range(128)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1525250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412197] Re: cells assumes compute nodes are subdivisible

2016-02-22 Thread Sean Dague
Cells v1 is in freeze state. Only regressions will be addressed.

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1412197

Title:
  cells assumes compute nodes are subdivisible

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  In nova/cells/state.py, _update_our_capacity() calculates the free units
  of compute capacity based on optimistically packing each instance type
  onto each compute node. This does not account for hypervisors that do
  not allow more than one instance per node, such as ironic.
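
  A hedged sketch of the difference (names and numbers are illustrative,
  not the actual cells code):

    def free_units(free_ram_mb, free_disk_gb, flavor,
                   one_instance_per_node=False):
        """Count how many instances of a flavor a node can still host."""
        if one_instance_per_node:
            # Ironic-style hypervisor: at most one instance per node,
            # regardless of leftover capacity.
            return 1 if (free_ram_mb >= flavor['ram'] and
                         free_disk_gb >= flavor['disk']) else 0
        # Optimistic packing, as the cells capacity math assumes.
        return min(free_ram_mb // flavor['ram'],
                   free_disk_gb // flavor['disk'])

    flavor = {'ram': 2048, 'disk': 20}
    print(free_units(131072, 2000, flavor))        # packing claims 64 fit
    print(free_units(131072, 2000, flavor, True))  # ironic node: just 1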

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1412197/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546423] Re: delete volume action link is shown even if it shouldn't

2016-02-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/281298
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=49a1a6356d8239d5f463c4e09564171dcccaa662
Submitter: Jenkins
Branch:master

commit 49a1a6356d8239d5f463c4e09564171dcccaa662
Author: Masco Kaliyamoorthy 
Date:   Wed Feb 17 19:45:38 2016 +0530

Hide delete volume if it has snapshot

If the volume has a snapshot, it is not allowed to
delete it. In tables the delete action is hidden but
if we go to the volume detail page, the delete action
is available.

This patch hides the delete volume on detail page too.

Change-Id: I4d8690b035dedd7ebcacb3479d346cfb3fb324f1
Closes-Bug: #1546423


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1546423

Title:
  delete volume action link is shown even if it shouldn't

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Copying an end-user bug report here:

  Description of problem:
  If a volume has snapshots, it cannot be deleted (they must be removed
  first). However, if you navigate to the volume's detail page, there is
  still a Delete Volume link - and it shouldn't be there.

  
  Version-Release number of selected component (if applicable):
  liberty

  How reproducible:
  100%

  Steps to Reproduce:
  1. Log in as the demo user.
  2. Go to Project - Compute - Volumes.
  3. Create a volume with default values and name it "test".
  4. Verify that the drop-down menu of the row with the "test" volume
     contains Delete Volume.
  5. Click Create Snapshot and name the snapshot "test_snap".
  6. Navigate back to the list of volumes (not their snapshots).
  7. Verify that the drop-down menu of the row with the "test" volume does
     NOT contain Delete Volume.
  8. Click the volume's name to open the detail page of volume "test".
  9. At the top right of the page there is an action button with a
     drop-down menu. It contains a Delete Volume item that is not
     accessible from the volume list.

  
  Actual results:
  The Delete Volume item is always shown on the detail page, even when a
  snapshot of the volume exists.

  Expected results:
  The Delete Volume item should not appear on the detail page while a
  snapshot of the volume exists.
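
  A hedged sketch of the guard the detail page was missing (attribute and
  class names are illustrative, not the exact merged patch):

    from horizon import tables

    class DeleteVolume(tables.DeleteAction):
        name = "delete"

        def allowed(self, request, volume=None):
            # Hide the action while the volume still has snapshots.
            if volume is not None and getattr(volume, 'has_snapshots', False):
                return False
            return True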

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1546423/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477261] Re: Juno Compute node unable to register hypervisor with Kilo Controller

2016-02-22 Thread Sean Dague
*** This bug is a duplicate of bug 1431201 ***
https://bugs.launchpad.net/bugs/1431201

** This bug has been marked a duplicate of bug 1431201
   kilo controller can't conduct juno compute nodes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1477261

Title:
  Juno Compute node unable to register hypervisor  with Kilo Controller

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Controller Node:
  OS: openSUSE13.2
  python-nova-2015.1.1.dev62-1.1.noarch
  openstack-nova-conductor-2015.1.1.dev62-1.1.noarch
  openstack-nova-scheduler-2015.1.1.dev62-1.1.noarch
  openstack-nova-cert-2015.1.1.dev62-1.1.noarch
  python-novaclient-2.23.0-2.4.noarch
  openstack-nova-novncproxy-2015.1.1.dev62-1.1.noarch
  openstack-nova-api-2015.1.1.dev62-1.1.noarch
  openstack-nova-consoleauth-2015.1.1.dev62-1.1.noarch
  openstack-nova-2015.1.1.dev62-1.1.noarch

  Compute Node:
  OS: openSUSE13.1
  openstack-nova-compute-2014.2.4.dev56-1.1.noarch
  python-novaclient-2.20.0-2.3.noarch
  python-nova-2014.2.4.dev56-1.1.noarch
  openstack-nova-2014.2.4.dev56-1.1.noarch

  During the installation of OpenStack with a Kilo controller node, a Kilo
  network node, and a Juno compute node, I found that the compute node was
  not registering its hypervisor with the controller. The hypervisor-list
  output was empty, but the service-list output showed the compute node.
  After tracking through the code I found the root of the issue:

  During nova-compute startup, the compute node checks whether it has
  already registered with the controller by querying both the services
  and compute_nodes tables. I noticed that the _get_service call was
  raising an exception.
  Call flow on the compute node I was looking at:
  /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py
    update_available_resource
      _update_available_resource
        _init_compute_node
          _get_service  <-- NotFound exception caught here
            self.conductor_api.service_get_by_compute_host(context, self.host)
              conductor/api.py:service_get_all_by

  
  Looking on the controller to determine the source of the exception, I
  found where the request is handled:

  /usr/lib/python2.7/site-packages/nova/conductor/manager.py ->
  service_get_all_by()

  In this function the incoming topic is 'compute', so the request is
  assumed to come from a Juno compute node. The services table is queried
  successfully, but Juno compute nodes also expect a compute_node field
  in the response, which I presume is not present in Kilo. The function
  proceeds to add the field and queries the compute_nodes table to
  determine whether the host already exists there. This is fine if the
  host is present in that table, but if it is not, an unhandled exception
  is thrown. As a result service_get_all_by returns no result, which
  propagates all the way back to the compute node and leaves the
  hypervisor unregistered with the controller.

  I was able to resolve this by catching the exception in
  service_get_all_by, creating the expected field, and defaulting it to
  None:

        if topic == 'compute':
            result = self.db.service_get_by_compute_host(context, host)
            # NOTE(sbauza): Only Juno computes are still calling this
            # conductor method for getting service_get_by_compute_node,
            # but expect a compute_node field so we can safely add it.
            try:
                result['compute_node'] = objects.ComputeNodeList.get_all_by_host(
                    context, result['host'])
                # FIXME(comstud) Potentially remove this on bump to v3.0
                result = [result]
            except Exception:
                result['compute_node'] = None
                result = [result]

  I am not sure whether this is the correct fix, but it unblocked me.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1477261/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

