[Yahoo-eng-team] [Bug 1363813] [NEW] Error while running tox

2014-09-01 Thread Nikunj Aggarwal
Public bug reported:

When I run tox on the latest Horizon code, the tests fail.

py27dj15 runtests: commands[1] | /bin/bash run_tests.sh -N --no-pep8
Running Horizon application tests
Traceback (most recent call last):
  File "/root/horizon/manage.py", line 23, in <module>
    execute_from_command_line(sys.argv)
  File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 453, in execute_from_command_line
    utility.execute()
  File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 263, in fetch_command
    app_name = get_commands()[subcommand]
  File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 109, in get_commands
    apps = settings.INSTALLED_APPS
  File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/conf/__init__.py", line 53, in __getattr__
    self._setup(name)
  File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/conf/__init__.py", line 48, in _setup
    self._wrapped = Settings(settings_module)
  File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/conf/__init__.py", line 134, in __init__
    raise ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e))
ImportError: Could not import settings 'horizon.test.settings' (Is it on sys.path?): No module named angular
Running openstack_dashboard tests
Traceback (most recent call last):
  File "/root/horizon/manage.py", line 23, in <module>
    execute_from_command_line(sys.argv)
  File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 453, in execute_from_command_line
    utility.execute()
  File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 263, in fetch_command
    app_name = get_commands()[subcommand]
  File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 109, in get_commands
    apps = settings.INSTALLED_APPS
  File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/conf/__init__.py", line 53, in __getattr__
    self._setup(name)
  File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/conf/__init__.py", line 48, in _setup
    self._wrapped = Settings(settings_module)
  File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/conf/__init__.py", line 134, in __init__
    raise ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e))
ImportError: Could not import settings 'openstack_dashboard.test.settings' (Is it on sys.path?): No module named angular
Tests failed.

___ summary ___
ERROR:   py26: InterpreterNotFound: python2.6
ERROR:   py27: commands failed
ERROR:   py27dj14: commands failed
ERROR:   py27dj15: commands failed
  pep8: commands succeeded
ERROR:   py33: InterpreterNotFound: python3.3
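
If the cause is a stale tox virtualenv that predates Horizon's XStatic-Angular requirement (an assumption; the report itself does not confirm it), recreating the environments usually clears the "No module named angular" import error:

  # rebuild the tox environments so newly added requirements get installed
  tox -r -e py27dj15
  # or install the missing package into the existing virtualenv
  .tox/py27dj15/bin/pip install XStatic-Angular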

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1363813

Title:
  Error while running tox

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I run tox on the latest Horizon code, the tests fail.

  py27dj15 runtests: commands[1] | /bin/bash run_tests.sh -N --no-pep8
  Running Horizon application tests
  Traceback (most recent call last):
    File "/root/horizon/manage.py", line 23, in <module>
      execute_from_command_line(sys.argv)
    File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 453, in execute_from_command_line
      utility.execute()
    File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
      self.fetch_command(subcommand).run_from_argv(self.argv)
    File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 263, in fetch_command
      app_name = get_commands()[subcommand]
    File "/root/horizon/.tox/py27dj15/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 109, in get_commands
      apps = settings.INSTALLED_APPS
    File

[Yahoo-eng-team] [Bug 1363901] [NEW] HTTP 500 is returned when using an in-use fixed IP to attach interface

2014-09-01 Thread Qin Zhao
Public bug reported:

When I post an 'attach interface' request to Nova with an in-use fixed
IP, Nova returns an HTTP 500 error and a confusing error message.

REQ: curl -i 'http://10.90.10.24:8774/v2/19abae5746b242d489d1c2862b228d8b/servers/b5cdb8f7-2350-4e28-bf75-7a696dfba73a/os-interface'
-X POST -H "Accept: application/json" -H "Content-Type: application/json"
-H "User-Agent: python-novaclient" -H "X-Auth-Project-Id: Public"
-H "X-Auth-Token: {SHA1}f04a301215d1014df8a0c7a32818235c2c5fbd1a"
-d '{"interfaceAttachment": {"fixed_ips": [{"ip_address": "10.100.99.4"}], "net_id": "173854d5-333f-4c78-b5a5-10d2e9c8d827"}}'
INFO (connectionpool:187) Starting new HTTP connection (1): 10.90.10.24
DEBUG (connectionpool:357) POST 
/v2/19abae5746b242d489d1c2862b228d8b/servers/b5cdb8f7-2350-4e28-bf75-7a696dfba73a/os-interface
 HTTP/1.1 500 128
RESP: [500] {'date': 'Mon, 01 Sep 2014 09:02:24 GMT', 'content-length': '128', 
'content-type': 'application/json; charset=UTF-8', 'x-compute-request-id': 
'req-abcdfaab-c208-4089-9e2e-d63bed1e8dfa'}
RESP BODY: {"computeFault": {"message": "The server has either erred or is incapable of performing the requested operation.", "code": 500}}


In fact, Nova works perfectly well; the error is caused by my incorrect
input. The Neutron client raises an IpAddressInUseClient exception, so
Nova should be able to handle that error and return an HTTP 400 instead,
informing the user to correct the request.
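
A minimal sketch of the suggested translation, assuming a small wrapper at the API layer (the wrapper and its wiring are illustrative, not Nova's actual code):

  from neutronclient.common import exceptions as neutron_exc
  from webob import exc as web_exc

  def attach_interface_or_400(attach_func, *args, **kwargs):
      # Surface the user's input error as a 400 instead of letting it
      # bubble up as a generic 500 computeFault.
      try:
          return attach_func(*args, **kwargs)
      except neutron_exc.IpAddressInUseClient as e:
          raise web_exc.HTTPBadRequest(explanation=str(e))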

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363901

Title:
  HTTP 500 is returned when using an in-use fixed IP to attach
  interface

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I post an 'attach interface' request to Nova with an in-use
  fixed IP, Nova returns an HTTP 500 error and a confusing error
  message.

  REQ: curl -i 'http://10.90.10.24:8774/v2/19abae5746b242d489d1c2862b228d8b/servers/b5cdb8f7-2350-4e28-bf75-7a696dfba73a/os-interface'
  -X POST -H "Accept: application/json" -H "Content-Type: application/json"
  -H "User-Agent: python-novaclient" -H "X-Auth-Project-Id: Public"
  -H "X-Auth-Token: {SHA1}f04a301215d1014df8a0c7a32818235c2c5fbd1a"
  -d '{"interfaceAttachment": {"fixed_ips": [{"ip_address": "10.100.99.4"}], "net_id": "173854d5-333f-4c78-b5a5-10d2e9c8d827"}}'
  INFO (connectionpool:187) Starting new HTTP connection (1): 10.90.10.24
  DEBUG (connectionpool:357) POST 
/v2/19abae5746b242d489d1c2862b228d8b/servers/b5cdb8f7-2350-4e28-bf75-7a696dfba73a/os-interface
 HTTP/1.1 500 128
  RESP: [500] {'date': 'Mon, 01 Sep 2014 09:02:24 GMT', 'content-length': 
'128', 'content-type': 'application/json; charset=UTF-8', 
'x-compute-request-id': 'req-abcdfaab-c208-4089-9e2e-d63bed1e8dfa'}
  RESP BODY: {"computeFault": {"message": "The server has either erred or is incapable of performing the requested operation.", "code": 500}}

  
  In fact, Nova works perfectly well; the error is caused by my incorrect
  input. The Neutron client raises an IpAddressInUseClient exception, so
  Nova should be able to handle that error and return an HTTP 400 instead,
  informing the user to correct the request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363899] [NEW] HyperV VM Console Log issues

2014-09-01 Thread Adelina Tuvenie
Public bug reported:

The size of the console log can get bigger than expected because of a
small nit in the check of the existing log file size, as well as a
wrong size constant.

The method that gets the serial port pipe currently returns a list
containing at most one element, the actual pipe path. To avoid
confusion, it should return the pipe path or None instead of a list.
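
For illustration, a minimal sketch of the suggested return convention (the names here are hypothetical, not the actual Hyper-V utils method):

  def get_vm_serial_port_pipe(vm_name, pipes_by_vm):
      """Return the named-pipe path of the VM's serial port, or None."""
      # Previously a list holding at most one pipe path was returned;
      # returning the single path (or None) removes the ambiguity.
      pipes = pipes_by_vm.get(vm_name, [])
      return pipes[0] if pipes else None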

** Affects: nova
 Importance: Undecided
 Assignee: Adelina Tuvenie (atuvenie)
 Status: In Progress


** Tags: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363899

Title:
  HyperV VM Console Log issues

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The size of the console log can get bigger than expected because of a
  small nit in the check of the existing log file size, as well as a
  wrong size constant.

  The method that gets the serial port pipe currently returns a list
  containing at most one element, the actual pipe path. To avoid
  confusion, it should return the pipe path or None instead of a list.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363917] [NEW] Errors on Configuring Federation documentation

2014-09-01 Thread Marcos Lobo
Public bug reported:

In the Configuring Federation documentation section all the external links are broken:
https://github.com/openstack/keystone/blob/master/doc/source/configure_federation.rst#configuring-federation

I think only one link (python-openstackclient) is right; all the others
are broken, i.e. they lead to a 404 Page Not Found.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1363917

Title:
  Errors on Configuring Federation documentation

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In the Configuring Federation documentation section all the external
  links are broken:
  https://github.com/openstack/keystone/blob/master/doc/source/configure_federation.rst#configuring-federation

  I think only one link (python-openstackclient) is right; all the
  others are broken, i.e. they lead to a 404 Page Not Found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1363917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362039] Re: Cannot Upgrade from Keystone Essex to Keystone Icehouse

2014-09-01 Thread Leigh Hayward
** Project changed: keystone => ubuntu

** Also affects: centos
   Importance: Undecided
   Status: New

** No longer affects: ubuntu

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1362039

Title:
  Cannot Upgrade from Keystone Essex to Keystone Icehouse

Status in CentOS:
  New

Bug description:
  When trying to upgrade from Essex to Icehouse in a test environment
  with an existing keystone database I get the following error:

  2014-08-26 16:07:38.888 11464 TRACE keystone     return versioning_api.upgrade(engine, repository, version)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/versioning/api.py", line 185, in upgrade
  2014-08-26 16:07:38.888 11464 TRACE keystone     return _migrate(url, repository, version, upgrade=True, err=err, **opts)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File "<string>", line 2, in _migrate
  2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/lib/python2.6/site-packages/migrate/versioning/util/__init__.py", line 160, in with_engine
  2014-08-26 16:07:38.888 11464 TRACE keystone     return f(*a, **kw)
  2014-08-26 16:07:38.888 11464 CRITICAL keystone [-] OperationalError: (OperationalError) (1060, "Duplicate column name 'valid'") '\nALTER TABLE token ADD valid BOOL' ()
  2014-08-26 16:07:38.888 11464 TRACE keystone Traceback (most recent call 
last):
  2014-08-26 16:07:38.888 11464 TRACE keystone   File "/usr/bin/keystone-manage", line 51, in <module>
  2014-08-26 16:07:38.888 11464 TRACE keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File 
/usr/lib/python2.6/site-packages/keystone/cli.py, line 190, in main
  2014-08-26 16:07:38.888 11464 TRACE keystone CONF.command.cmd_class.main()
  2014-08-26 16:07:38.888 11464 TRACE keystone   File 
/usr/lib/python2.6/site-packages/keystone/cli.py, line 66, in main
  2014-08-26 16:07:38.888 11464 TRACE keystone 
migration_helpers.sync_database_to_version(extension, version)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File 
/usr/lib/python2.6/site-packages/keystone/common/sql/migration_helpers.py, 
line 139, in sync_database_to_version
  2014-08-26 16:07:38.888 11464 TRACE keystone 
migration.db_sync(sql.get_engine(), abs_path, version=version)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File 
/usr/lib/python2.6/site-packages/keystone/openstack/common/db/sqlalchemy/migration.py,
 line 197, in db_sync
  2014-08-26 16:07:38.888 11464 TRACE keystone return 
versioning_api.upgrade(engine, repository, version)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File 
/usr/lib/python2.6/site-packages/migrate/versioning/api.py, line 185, in 
upgrade
  2014-08-26 16:07:38.888 11464 TRACE keystone return _migrate(url, 
repository, version, upgrade=True, err=err, **opts)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File "<string>", line 2, in _migrate
  2014-08-26 16:07:38.888 11464 TRACE keystone   File 
/usr/lib/python2.6/site-packages/migrate/versioning/util/__init__.py, line 
160, in with_engine
  2014-08-26 16:07:38.888 11464 TRACE keystone return f(*a, **kw)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File 
/usr/lib/python2.6/site-packages/migrate/versioning/api.py, line 364, in 
_migrate
  2014-08-26 16:07:38.888 11464 TRACE keystone schema.runchange(ver, 
change, changeset.step)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File 
/usr/lib/python2.6/site-packages/migrate/versioning/schema.py, line 90, in 
runchange
  2014-08-26 16:07:38.888 11464 TRACE keystone change.run(self.engine, step)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File 
/usr/lib/python2.6/site-packages/migrate/versioning/script/py.py, line 145, 
in run
  2014-08-26 16:07:38.888 11464 TRACE keystone script_func(engine)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File 
/usr/lib/python2.6/site-packages/keystone/common/sql/migrate_repo/versions/003_token_valid.py,
 line 28, in upgrade
  2014-08-26 16:07:38.888 11464 TRACE keystone valid.create(token, 
populate_default=True)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File 
/usr/lib/python2.6/site-packages/migrate/changeset/schema.py, line 526, in 
create
  2014-08-26 16:07:38.888 11464 TRACE keystone 
engine._run_visitor(visitorcallable, self, connection, **kwargs)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File 
/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py, line 1479, in 
_run_visitor
  2014-08-26 16:07:38.888 11464 TRACE keystone 
conn._run_visitor(visitorcallable, element, **kwargs)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File 
/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py, line 1122, in 
_run_visitor
  2014-08-26 16:07:38.888 11464 TRACE keystone 
**kwargs).traverse_single(element)
  2014-08-26 16:07:38.888 11464 TRACE keystone   File 

[Yahoo-eng-team] [Bug 1351466] Re: can't copy '.../cisco_cfg_agent.ini': doesn't exist

2014-09-01 Thread Ladislav Smola
** Changed in: tripleo
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1351466

Title:
  can't copy '.../cisco_cfg_agent.ini': doesn't exist

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  Started roughly 1800 UTC this evening

  2014-08-01 19:36:06.878 | error: can't copy
  'etc/neutron/plugins/cisco/cisco_cfg_agent.ini': doesn't exist or not
  a regular file

  http://logs.openstack.org/70/111370/1/check-tripleo/check-tripleo-
  ironic-undercloud-precise-nonha/3bc75ae/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1351466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363932] [NEW] Internal error Enabling Federation Extension

2014-09-01 Thread Marcos Lobo
Public bug reported:

Following the steps here
http://docs.openstack.org/developer/keystone/extensions/federation.html
I've noticed a possible bug, but I'm not sure; let me explain.

Step 3 of 
http://docs.openstack.org/developer/keystone/extensions/federation.html
[pipeline:api_v3]
pipeline = access_log sizelimit url_normalize token_auth admin_token_auth 
xml_body json_body ec2_extension s3_extension federation_extension service_v3

OK, no problems. Restart keystone (under Apache), run the keystone
tenant-list command, and everything is fine.

Now, modify the keystone-paste.ini file again (by the way, on a fresh
keystone installation this file is called keystone-dist-paste.ini by
default) and put federation_extension at the end of the line, like:

[pipeline:api_v3]
pipeline = access_log sizelimit url_normalize token_auth admin_token_auth 
xml_body json_body ec2_extension s3_extension service_v3 federation_extension

Restart keystone, and when you run the keystone tenant-list command,
keystone raises: Internal Server Error 500
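
This matches how PasteDeploy resolves a pipeline: every element except the last must be a filter, and the last must be an app. Since federation_extension is defined as a filter, placing it last makes paste look up an app section of that name, which fails with the LookupError below. A sketch of the intended layout, assuming the stock section names (the filter_factory path is taken from the Icehouse-era keystone-paste.ini and may differ between releases):

  [filter:federation_extension]
  paste.filter_factory = keystone.contrib.federation.routers:FederationExtension.factory

  [pipeline:api_v3]
  # filters in front, the service app (service_v3) must stay last
  pipeline = access_log sizelimit url_normalize token_auth admin_token_auth xml_body json_body ec2_extension s3_extension federation_extension service_v3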

This is the log information about this error:

[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] mod_wsgi 
(pid=24803): Target WSGI script '/var/www/cgi-bin/keystone/main' cannot be 
loaded as Python module.
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] mod_wsgi 
(pid=24803): Exception occurred processing WSGI script 
'/var/www/cgi-bin/keystone/main'.
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] Traceback (most 
recent call last):
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164]   File 
"/var/www/cgi-bin/keystone/main", line 58, in <module>
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] name=name)
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164]   File 
/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py, line 247, in 
loadapp
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] return 
loadobj(APP, uri, name=name, **kw)
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164]   File 
/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py, line 272, in 
loadobj
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] return 
context.create()
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164]   File 
/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py, line 710, in create
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] return 
self.object_type.invoke(self)
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164]   File 
/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py, line 144, in invoke
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] 
**context.local_conf)
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164]   File 
/usr/lib/python2.6/site-packages/paste/deploy/util.py, line 56, in fix_call
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] val = 
callable(*args, **kw)
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164]   File 
/usr/lib/python2.6/site-packages/paste/urlmap.py, line 25, in urlmap_factory
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] app = 
loader.get_app(app_name, global_conf=global_conf)
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164]   File 
/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py, line 350, in 
get_app
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] name=name, 
global_conf=global_conf).create()
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164]   File 
/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py, line 362, in 
app_context
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] APP, name=name, 
global_conf=global_conf)
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164]   File 
/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py, line 450, in 
get_context
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] 
global_additions=global_additions)
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164]   File 
/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py, line 559, in 
_pipeline_app_context
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] APP, 
pipeline[-1], global_conf)
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164]   File 
/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py, line 408, in 
get_context
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] object_type, 
name=name)
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164]   File 
/usr/lib/python2.6/site-packages/paste/deploy/loadwsgi.py, line 587, in 
find_config_section
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] self.filename))
[Mon Sep 01 11:28:56 2014] [error] [client 128.142.145.164] LookupError: No 
section 'federation_extension' (prefixed by 'app' or 'application' or 
'composite' or 'composit' or 'pipeline' or 'filter-app') found in config 
/usr/share/keystone/keystone-dist-paste.ini

My question is: Is 

[Yahoo-eng-team] [Bug 1362678] Re: multi-domain has problems with LDAP identity on default domain

2014-09-01 Thread Henry Nash
no problem...that's good to hear.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1362678

Title:
  multi-domain has problems with LDAP identity on default domain

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  What I am trying to achieve:

  I want to authenticate all users of the default domain against our
  company's central LDAP server. This works pretty well.

  For Heat I need a user store that is writable. Our central LDAP
  server cannot be written to from OpenStack. Therefore I configured the
  heat domain with SQL identity.

  This all works up to the point when the heat domain admin needs to be
  authorized. That authorization request is always processed with the
  LDAP identity. I don't know whether the domain is missing from the
  keystone V3 API authorization request or keystone does not route the
  request correctly to the SQL identity. To clarify this, I opened this
  bug, and Steven Hardy encouraged me to do so.

  /etc/keystone/keystone.conf:

  [identity]
  default_domain_id=default
  domain_specific_drivers_enabled=true
  domain_config_dir=/etc/keystone/domains
  driver = keystone.identity.backends.ldap.Identity

  [ldap]
  url=ldap://ldap2.open-xchange.com:389
  suffix=dc=open-xchange,dc=com
  etc.

  /etc/keystone/domains/keystone.heat.conf:

  [identity]
  driver = keystone.identity.backends.sql.Identity

  [ldap]

  /etc/heat/heat.conf:
  deferred_auth_method=trusts
  trusts_delegated_roles=heat_stack_owner
  heat_stack_user_role=heat_stack_user
  stack_user_domain=a904d890e0de47dc9f2090c20bb1f45c
  stack_domain_admin=heat_domain_admin
  stack_domain_admin_password=

  openstack --os-token $OS_TOKEN --os-url=http://controller:5000/v3 --os-identity-api-version=3 domain list
  +----------------------------------+---------+---------+----------------------------------------------------------------------+
  | ID                               | Name    | Enabled | Description                                                          |
  +----------------------------------+---------+---------+----------------------------------------------------------------------+
  | a904d890e0de47dc9f2090c20bb1f45c | heat    | True    | Owns users and projects created by heat                              |
  | default                          | Default | True    | Owns users and tenants (i.e. projects) available on Identity API v2. |
  +----------------------------------+---------+---------+----------------------------------------------------------------------+

  openstack --os-token $OS_TOKEN --os-url=http://controller:5000/v3 --os-identity-api-version=3 user create --password  --domain a904d890e0de47dc9f2090c20bb1f45c --description "Manages users and projects created by heat" heat_domain_admin
  +-------------+--------------------------------------------------------------------------------------+
  | Field       | Value                                                                                |
  +-------------+--------------------------------------------------------------------------------------+
  | description | Manages users and projects created by heat                                           |
  | domain_id   | a904d890e0de47dc9f2090c20bb1f45c                                                     |
  | enabled     | True                                                                                 |
  | id          | 38877ca5daed4c9fbbb6c853d3d88e36                                                     |
  | links       | {u'self': u'http://controller-test:5000/v3/users/38877ca5daed4c9fbbb6c853d3d88e36'}  |
  | name        | heat_domain_admin                                                                    |
  +-------------+--------------------------------------------------------------------------------------+

  openstack --os-token $OS_TOKEN --os-url=http://controller:5000/v3
  --os-identity-api-version=3 role add --user
  38877ca5daed4c9fbbb6c853d3d88e36 --domain
  a904d890e0de47dc9f2090c20bb1f45c admin

  Everything set up according to:
  
http://hardysteven.blogspot.de/2014/04/heat-auth-model-updates-part-1-trusts.html
  
http://hardysteven.blogspot.de/2014/04/heat-auth-model-updates-part-2-stack.html

  I tested this using this example stack: https://github.com/openstack
  /heat-templates/blob/master/hot/software-config/example-templates
  /example-deploy-sequence.yaml

  Then I get the following authentication problem in keystone:
  2014-08-28 13:20:40.172 4915 INFO eventlet.wsgi.server [-] 10.20.31.200 - - 
[28/Aug/2014 13:20:40] POST /v3/auth/tokens HTTP/1.1 201 12110 0.163805
  2014-08-28 13:20:40.326 4915 DEBUG keystone.middleware.core [-] Auth token 
not in the request header. Will not build auth context. process_request 

[Yahoo-eng-team] [Bug 1363955] [NEW] Broken links in doc/source/devref/filter_scheduler.rst

2014-09-01 Thread Rui Chen
Public bug reported:

When you browse directly to
http://docs.openstack.org/developer/nova/devref/filter_scheduler.html,
there are broken links for the following classes:
'AggregateNumInstancesFilter', 'BaseHostFilter', 'RamWeigher'.

** Affects: nova
 Importance: Undecided
 Assignee: Rui Chen (kiwik-chenrui)
 Status: In Progress


** Tags: documentation

** Changed in: nova
 Assignee: (unassigned) => Rui Chen (kiwik-chenrui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363955

Title:
  Broken links in doc/source/devref/filter_scheduler.rst

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When you browse directly to
  http://docs.openstack.org/developer/nova/devref/filter_scheduler.html,
  there are broken links for the following classes:
  'AggregateNumInstancesFilter', 'BaseHostFilter', 'RamWeigher'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363967] [NEW] RESTful API to retrieve dvr host mac for ODL

2014-09-01 Thread Vinod Kumar
Public bug reported:

Implementing RESTful interface to retrieve DVR host mac.

** Affects: neutron
 Importance: Undecided
 Assignee: Vinod Kumar (vinod-kumar5)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Vinod Kumar (vinod-kumar5)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1363967

Title:
  RESTful API to retrieve dvr host mac for ODL

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Implementing RESTful interface to retrieve DVR host mac.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1363967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363979] [NEW] ValueError: Unable to determine whether fp is closed.

2014-09-01 Thread Andrey Kurilin
Public bug reported:

Several tests fail on master with the following traceback:

2014-09-01 04:05:08.708 | Traceback (most recent call last):
2014-09-01 04:05:08.708 |   File novaclient/tests/v1_1/test_servers.py, line 
223, in test_update_server
2014-09-01 04:05:08.709 | s.update(name='hi')
2014-09-01 04:05:08.709 |   File novaclient/v1_1/servers.py, line 55, in 
update
2014-09-01 04:05:08.709 | self.manager.update(self, name=name)
2014-09-01 04:05:08.709 |   File novaclient/v1_1/servers.py, line 912, in 
update
2014-09-01 04:05:08.709 | return self._update(/servers/%s % 
base.getid(server), body, server)
2014-09-01 04:05:08.709 |   File novaclient/base.py, line 113, in _update
2014-09-01 04:05:08.709 | _resp, body = self.api.client.put(url, body=body)
2014-09-01 04:05:08.709 |   File novaclient/client.py, line 493, in put
2014-09-01 04:05:08.709 | return self._cs_request(url, 'PUT', **kwargs)
2014-09-01 04:05:08.710 |   File novaclient/client.py, line 465, in 
_cs_request
2014-09-01 04:05:08.710 | resp, body = self._time_request(url, method, 
**kwargs)
2014-09-01 04:05:08.710 |   File novaclient/client.py, line 439, in 
_time_request
2014-09-01 04:05:08.710 | resp, body = self.request(url, method, **kwargs)
2014-09-01 04:05:08.710 |   File novaclient/client.py, line 410, in request
2014-09-01 04:05:08.710 | **kwargs)
2014-09-01 04:05:08.710 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests/api.py,
 line 44, in request
2014-09-01 04:05:08.710 | return session.request(method=method, url=url, 
**kwargs)
2014-09-01 04:05:08.710 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests/sessions.py,
 line 448, in request
2014-09-01 04:05:08.710 | resp = self.send(prep, **send_kwargs)
2014-09-01 04:05:08.711 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests_mock/mocker.py,
 line 67, in _fake_send
2014-09-01 04:05:08.711 | return self._real_send(session, request, **kwargs)
2014-09-01 04:05:08.711 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests/sessions.py,
 line 591, in send
2014-09-01 04:05:08.711 | r.content
2014-09-01 04:05:08.711 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests/models.py,
 line 707, in content
2014-09-01 04:05:08.711 | self._content = 
bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
2014-09-01 04:05:08.711 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests/models.py,
 line 638, in generate
2014-09-01 04:05:08.711 | for chunk in self.raw.stream(chunk_size, 
decode_content=True):
2014-09-01 04:05:08.711 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests/packages/urllib3/response.py,
 line 255, in stream
2014-09-01 04:05:08.712 | while not is_fp_closed(self._fp):
2014-09-01 04:05:08.712 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests/packages/urllib3/util/response.py,
 line 22, in is_fp_closed
2014-09-01 04:05:08.712 | raise ValueError("Unable to determine whether fp is closed.")
2014-09-01 04:05:08.712 | ValueError: Unable to determine whether fp is closed.

Related to: py26, py27, py33
Tracebacks of all failed tests: http://paste.openstack.org/show/104193/
Full logs:  
http://logs.openstack.org/91/117591/3/check/gate-python-novaclient-python26/ab2eea5/console.html

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: novaclient valueerror

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363979

Title:
  ValueError: Unable to determine whether fp is closed.

Status in OpenStack Compute (Nova):
  New

Bug description:
  Several tests fail on master with the following traceback:

  2014-09-01 04:05:08.708 | Traceback (most recent call last):
  2014-09-01 04:05:08.708 |   File novaclient/tests/v1_1/test_servers.py, 
line 223, in test_update_server
  2014-09-01 04:05:08.709 | s.update(name='hi')
  2014-09-01 04:05:08.709 |   File novaclient/v1_1/servers.py, line 55, in 
update
  2014-09-01 04:05:08.709 | self.manager.update(self, name=name)
  2014-09-01 04:05:08.709 |   File novaclient/v1_1/servers.py, line 912, in 
update
  2014-09-01 04:05:08.709 | return self._update(/servers/%s % 
base.getid(server), body, server)
  2014-09-01 04:05:08.709 |   File novaclient/base.py, line 113, in _update
  2014-09-01 04:05:08.709 | _resp, body = self.api.client.put(url, 
body=body)
  2014-09-01 04:05:08.709 |   File novaclient/client.py, line 493, in put
  2014-09-01 04:05:08.709 |

[Yahoo-eng-team] [Bug 1363979] Re: [novaclient] tests are failed on master with ValueError: Unable to determine whether fp is closed.

2014-09-01 Thread Andrey Kurilin
** Project changed: nova = python-novaclient

** Summary changed:

- [novaclient] tests are failed on master with ValueError: Unable to determine whether fp is closed.
+ tests are failed on master with ValueError: Unable to determine whether fp is closed.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363979

Title:
  tests are failed on master with ValueError: Unable to determine
  whether fp is closed.

Status in Python client library for Nova:
  New

Bug description:
  Several tests fail on master with the following traceback:

  2014-09-01 04:05:08.708 | Traceback (most recent call last):
  2014-09-01 04:05:08.708 |   File novaclient/tests/v1_1/test_servers.py, 
line 223, in test_update_server
  2014-09-01 04:05:08.709 | s.update(name='hi')
  2014-09-01 04:05:08.709 |   File novaclient/v1_1/servers.py, line 55, in 
update
  2014-09-01 04:05:08.709 | self.manager.update(self, name=name)
  2014-09-01 04:05:08.709 |   File novaclient/v1_1/servers.py, line 912, in 
update
  2014-09-01 04:05:08.709 | return self._update(/servers/%s % 
base.getid(server), body, server)
  2014-09-01 04:05:08.709 |   File novaclient/base.py, line 113, in _update
  2014-09-01 04:05:08.709 | _resp, body = self.api.client.put(url, 
body=body)
  2014-09-01 04:05:08.709 |   File novaclient/client.py, line 493, in put
  2014-09-01 04:05:08.709 | return self._cs_request(url, 'PUT', **kwargs)
  2014-09-01 04:05:08.710 |   File novaclient/client.py, line 465, in 
_cs_request
  2014-09-01 04:05:08.710 | resp, body = self._time_request(url, method, 
**kwargs)
  2014-09-01 04:05:08.710 |   File novaclient/client.py, line 439, in 
_time_request
  2014-09-01 04:05:08.710 | resp, body = self.request(url, method, **kwargs)
  2014-09-01 04:05:08.710 |   File novaclient/client.py, line 410, in request
  2014-09-01 04:05:08.710 | **kwargs)
  2014-09-01 04:05:08.710 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests/api.py,
 line 44, in request
  2014-09-01 04:05:08.710 | return session.request(method=method, url=url, 
**kwargs)
  2014-09-01 04:05:08.710 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests/sessions.py,
 line 448, in request
  2014-09-01 04:05:08.710 | resp = self.send(prep, **send_kwargs)
  2014-09-01 04:05:08.711 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests_mock/mocker.py,
 line 67, in _fake_send
  2014-09-01 04:05:08.711 | return self._real_send(session, request, 
**kwargs)
  2014-09-01 04:05:08.711 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests/sessions.py,
 line 591, in send
  2014-09-01 04:05:08.711 | r.content
  2014-09-01 04:05:08.711 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests/models.py,
 line 707, in content
  2014-09-01 04:05:08.711 | self._content = 
bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
  2014-09-01 04:05:08.711 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests/models.py,
 line 638, in generate
  2014-09-01 04:05:08.711 | for chunk in self.raw.stream(chunk_size, 
decode_content=True):
  2014-09-01 04:05:08.711 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests/packages/urllib3/response.py,
 line 255, in stream
  2014-09-01 04:05:08.712 | while not is_fp_closed(self._fp):
  2014-09-01 04:05:08.712 |   File 
/home/jenkins/workspace/gate-python-novaclient-python26/.tox/py26/lib/python2.6/site-packages/requests/packages/urllib3/util/response.py,
 line 22, in is_fp_closed
  2014-09-01 04:05:08.712 | raise ValueError("Unable to determine whether fp is closed.")
  2014-09-01 04:05:08.712 | ValueError: Unable to determine whether fp is closed.

  Related to: py26, py27, py33
  Tracebacks of all failed tests: http://paste.openstack.org/show/104193/
  Full logs:  
http://logs.openstack.org/91/117591/3/check/gate-python-novaclient-python26/ab2eea5/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1363979/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364085] [NEW] DBDuplicateEntry: (IntegrityError) duplicate key value violates unique constraint ipavailabilityranges_pkey

2014-09-01 Thread Rossella Sblendido
Public bug reported:

It affects only postgresql
To reproduce this bug:

1) Create a network with few IPs (eg /29)
2) Create 2 VMs that use the /29 network.
3) Destroy the 2 VMs
4) From horizon, create a number of VMs > IPs available (e.g. 8)

When there's no IP available, _rebuild_availability_ranges will be
called to recycle the IPs of the VMs that no longer exist (the ones we
created at step 2). From the logs I see that when there's a bulk
creation of ports and the IP range is exhausted, two or more
_rebuild_availability_ranges calls are triggered at the same time by
different port creation operations. This leads to the DBDuplicateEntry,
since the operation that is performed last will try to insert stale
data into the DB.
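
A common fix for this pattern is to treat the duplicate-key failure as a lost race and retry, since the concurrent rebuild has already repaired the availability ranges. A minimal sketch of the idea, with self-contained stand-ins rather than Neutron's actual code:

  class DBDuplicateEntry(Exception):
      """Stand-in for the oslo.db duplicate-key exception."""

  def rebuild_with_retry(rebuild_availability_ranges, retries=2):
      # If another request rebuilt the ranges first, our INSERT of the
      # now-stale rows violates the primary key; retrying re-reads the
      # fresh state instead of failing the whole API call.
      for attempt in range(retries + 1):
          try:
              return rebuild_availability_ranges()
          except DBDuplicateEntry:
              if attempt == retries:
                  raise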

See log:

 362399 2014-08-26 10:01:01.926 27821 DEBUG neutron.db.db_base_plugin_v2 [-] 
All IPs from subnet a77b383d-e881-49c1-8143-910ec46fe42a (10.238.192.0
/18) allocated _try_generate_ip 
/usr/lib64/python2.6/site-packages/neutron/db/db_base_plugin_v2.py:384
 362400 2014-08-26 10:01:01.927 27821 DEBUG neutron.db.db_base_plugin_v2 [-] 
Rebuilding availability ranges for subnet {'allocation_pools': [{'star
t': u'10.238.200.129', 'end': u'10.238.200.254'}], 'host_routes': [], 'cidr': 
u'10.238.192.0/18', 'id': u'a77b383d-e881-49c1-8143-910ec46fe42a', 
'name': u'floating', 'enable_dhcp': False, 'network_id': 
u'a4f3c5ac-de4a-44c5-94b8-bd07a14c4d1a', 'tenant_id': u'774289027a8441babaed
f5774e49e971', 'dns_nameservers': [], 'gateway_ip': u'10.238.192.3', 
'ip_version': 4, 'shared': False} _rebuild_availability_ranges /usr/li
b64/python2.6/site-packages/neutron/db/db_base_plugin_v2.py:414
 362401 2014-08-26 10:01:01.928 27821 DEBUG neutron.db.db_base_plugin_v2 [-] 
Generated mac for network a4f3c5ac-de4a-44c5-94b8-bd07a14c4d1a is fa:1
6:3e:6f:f3:02 _generate_mac 
/usr/lib64/python2.6/site-packages/neutron/db/db_base_plugin_v2.py:305
 362402 2014-08-26 10:01:01.952 27821 INFO neutron.wsgi [-] 10.1.100.1 - - 
[26/Aug/2014 10:01:01] GET /v2.0/subnets.json?id=fa32a700-57ce-4bfc-a2b
f-7fb19e20f81b HTTP/1.1 200 504 0.091873
 362403 
 362404 2014-08-26 10:01:01.966 27821 INFO neutron.wsgi [-] (27821) accepted 
('10.1.100.1', 21561)
 362405 
 362406 2014-08-26 10:01:01.977 27821 DEBUG neutron.db.db_base_plugin_v2 [-] 
All IPs from subnet a77b383d-e881-49c1-8143-910ec46fe42a (10.238.192.0
/18) allocated _try_generate_ip 
/usr/lib64/python2.6/site-packages/neutron/db/db_base_plugin_v2.py:384
 362407 2014-08-26 10:01:01.977 27821 DEBUG neutron.db.db_base_plugin_v2 [-] 
Rebuilding availability ranges for subnet {'allocation_pools': [{'star
t': u'10.238.200.129', 'end': u'10.238.200.254'}], 'host_routes': [], 'cidr': 
u'10.238.192.0/18', 'id': u'a77b383d-e881-49c1-8143-910ec46fe42a', 
'name': u'floating', 'enable_dhcp': False, 'network_id': 
u'a4f3c5ac-de4a-44c5-94b8-bd07a14c4d1a', 'tenant_id': u'774289027a8441babaed
f5774e49e971', 'dns_nameservers': [], 'gateway_ip': u'10.238.192.3', 
'ip_version': 4, 'shared': False} _rebuild_availability_ranges 
/usr/lib64/python2.6/site-packages/neutron/db/db_base_plugin_v2.py:414

** Affects: neutron
 Importance: Undecided
 Assignee: Rossella Sblendido (rossella-o)
 Status: New


** Tags: postgresql

** Changed in: neutron
 Assignee: (unassigned) => Rossella Sblendido (rossella-o)

** Tags added: postgresql

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364085

Title:
  DBDuplicateEntry: (IntegrityError) duplicate key value violates unique
  constraint ipavailabilityranges_pkey

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  It affects only postgresql
  To reproduce this bug:

  1) Create a network with few IPs (eg /29)
  2) Create 2 VMs that use the /29 network.
  3) Destroy the 2 VMs
  4) From horizon, create a number of VMs > IPs available (e.g. 8)

  When there's no IP available, _rebuild_availability_ranges will be
  called to recycle the IPs of the VMs that no longer exist (the ones
  we created at step 2). From the logs I see that when there's a bulk
  creation of ports and the IP range is exhausted, two or more
  _rebuild_availability_ranges calls are triggered at the same time by
  different port creation operations. This leads to the
  DBDuplicateEntry, since the operation that is performed last will try
  to insert stale data into the DB.

  See log:

   362399 2014-08-26 10:01:01.926 27821 DEBUG neutron.db.db_base_plugin_v2 [-] 
All IPs from subnet a77b383d-e881-49c1-8143-910ec46fe42a (10.238.192.0
/18) allocated _try_generate_ip 
/usr/lib64/python2.6/site-packages/neutron/db/db_base_plugin_v2.py:384
   362400 2014-08-26 10:01:01.927 27821 DEBUG neutron.db.db_base_plugin_v2 [-] 
Rebuilding availability ranges for subnet {'allocation_pools': [{'star
t': u'10.238.200.129', 'end': u'10.238.200.254'}], 

[Yahoo-eng-team] [Bug 1364133] [NEW] Neutron LBaaS vip invisible in dashboard

2014-09-01 Thread Mike Spreitzer
Public bug reported:

I have a Heat template with an output like this:

  pool_ip_address:
value: {get_attr: [pool, vip, address]}
description: The IP address of the load balancing pool

For that output, the value shows up in command line output (`heat stack-
show`) but in the dashboard the value is invisible; the output name and
description appear in both.

Here is the template source for the LB and pool:

  pool:
type: OS::Neutron::Pool
properties:
  protocol: HTTP
  monitors: [{get_resource: monitor}]
  subnet_id: {get_param: subnet_id}
  lb_method: ROUND_ROBIN
  vip:
protocol_port: 80
  lb:
type: OS::Neutron::LoadBalancer
properties:
  protocol_port: 80
  pool_id: {get_resource: pool}


Here is the relevant part of the `heat stack-show` output:

|  | output_value: 10.0.0.21, <spaces snipped/>                                 |
|  | description: The IP address of the load balancing pool, <spaces snipped/>  |
|  | output_key: pool_ip_address <spaces snipped/>                              |


This is from an install by DevStack today.  Here are the versions I am
running:

ubuntu@mjs-dstk-901a:/opt/stack/horizon$ git branch -v
* master e0abdfa Merge "Port details template missing some translation."

ubuntu@mjs-dstk-901a:/opt/stack/horizon$ cd ../neutron/

ubuntu@mjs-dstk-901a:/opt/stack/neutron$ git branch -v
* master 4a91073 Merge "Remove old policies from policy.json"

ubuntu@mjs-dstk-901a:/opt/stack/neutron$ cd ../python-heatclient/

ubuntu@mjs-dstk-901a:/opt/stack/python-heatclient$ git branch -v
* master 4bc53ac Merge "Handle upper cased endpoints"

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1364133

Title:
  Neutron LBaaS vip invisible in dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I have a Heat template with an output like this:

pool_ip_address:
  value: {get_attr: [pool, vip, address]}
  description: The IP address of the load balancing pool

  For that output, the value shows up in command line output (`heat
  stack-show`) but in the dashboard the value is invisible; the output
  name and description appear in both.

  Here is the template source for the LB and pool:

pool:
  type: OS::Neutron::Pool
  properties:
protocol: HTTP
monitors: [{get_resource: monitor}]
subnet_id: {get_param: subnet_id}
lb_method: ROUND_ROBIN
vip:
  protocol_port: 80
lb:
  type: OS::Neutron::LoadBalancer
  properties:
protocol_port: 80
pool_id: {get_resource: pool}

  
  Here is the relevant part of the `heat stack-show` output:

  |  | output_value: 10.0.0.21, <spaces snipped/>                                 |
  |  | description: The IP address of the load balancing pool, <spaces snipped/>  |
  |  | output_key: pool_ip_address <spaces snipped/>                              |


  This is from an install by DevStack today.  Here are the versions I am
  running:

  ubuntu@mjs-dstk-901a:/opt/stack/horizon$ git branch -v
  * master e0abdfa Merge "Port details template missing some translation."

  ubuntu@mjs-dstk-901a:/opt/stack/horizon$ cd ../neutron/

  ubuntu@mjs-dstk-901a:/opt/stack/neutron$ git branch -v
  * master 4a91073 Merge "Remove old policies from policy.json"

  ubuntu@mjs-dstk-901a:/opt/stack/neutron$ cd ../python-heatclient/

  ubuntu@mjs-dstk-901a:/opt/stack/python-heatclient$ git branch -v
  * master 4bc53ac Merge "Handle upper cased endpoints"

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1364133/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364166] [NEW] Netaddr can't check IPv6 subnets for overlap

2014-09-01 Thread Sergey Shnaidman
Public bug reported:

Because of a netaddr bug (https://github.com/drkjam/netaddr/issues/69),
netaddr cannot check IPv6 subnets for overlap:
https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L572

A netaddr version higher than 0.7.12 should be used.
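
Until the library is upgraded, the overlap test can avoid IPSet (and its broken __len__) entirely; a minimal sketch using only IPNetwork bounds (an illustration, not Neutron's actual code):

  import netaddr

  def cidrs_overlap(cidr_a, cidr_b):
      """True if the two CIDRs share any address; safe for IPv6 sizes."""
      a = netaddr.IPNetwork(cidr_a)
      b = netaddr.IPNetwork(cidr_b)
      if a.version != b.version:
          return False
      # Interval test on the integer bounds; no IPSet is materialized.
      return a.first <= b.last and b.first <= a.last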

Traceback:

DEBUG neutron.api.v2.base [req-98c7e546-5dfa-488e-bf99-ac8c6d11b45e None] 
Request body: {u'subnet': {u'ip_version': 6, u'network_id': 
u'49a71376-b484-481f-a1ee-e2f3c3daca67', u'cidr': u'2003::/64', u'gateway_ip': 
u'2003::1'}} prepare_request_body 
/opt/stack/new/neutron/neutron/api/v2/base.py:578
 598 ERROR neutron.api.v2.resource [req-98c7e546-5dfa-488e-bf99-ac8c6d11b45e 
None] create failed
Traceback (most recent call last):
  File /opt/stack/new/neutron/neutron/api/v2/resource.py, line 87, in resource
result = method(request=request, **args)
  File /opt/stack/new/neutron/neutron/api/v2/base.py, line 448, in create
obj = obj_creator(request.context, **kwargs)
  File /opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 660, in 
create_subnet
result = super(Ml2Plugin, self).create_subnet(context, subnet)
  File /opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 1054, in 
create_subnet
self._validate_subnet_cidr(context, network, s['cidr'])
  File /opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 572, in 
_validate_subnet_cidr
if (netaddr.IPSet([subnet.cidr]) & new_subnet_ipset):
  File "/usr/lib/python2.7/dist-packages/netaddr/ip/sets.py", line 520, in __len__
    raise IndexError("range contains greater than %d (maxint) IP addresses! Use the .size property instead." % _sys_maxint)
IndexError: range contains greater than 9223372036854775807 (maxint) IP addresses! Use the .size property instead.
 598 TRACE neutron.api.v2.resource
 598 INFO neutron.wsgi [req-98c7e546-5dfa-488e-bf99-ac8c6d11b45e None] 
127.0.0.1 - - [01/Sep/2014 12:56:01] POST /v2.0/subnets HTTP/1.1 500 378 
0.080822
 (598) accepted ('127.0.0.1', 59222)

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364166

Title:
  Netaddr can't check IPv6 subnets for overlap

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Because of a netaddr bug (https://github.com/drkjam/netaddr/issues/69),
  netaddr cannot check IPv6 subnets for overlap:
  https://github.com/openstack/neutron/blob/master/neutron/db/db_base_plugin_v2.py#L572

  A netaddr version higher than 0.7.12 should be used.

  Traceback:

  DEBUG neutron.api.v2.base [req-98c7e546-5dfa-488e-bf99-ac8c6d11b45e None] 
Request body: {u'subnet': {u'ip_version': 6, u'network_id': 
u'49a71376-b484-481f-a1ee-e2f3c3daca67', u'cidr': u'2003::/64', u'gateway_ip': 
u'2003::1'}} prepare_request_body 
/opt/stack/new/neutron/neutron/api/v2/base.py:578
   598 ERROR neutron.api.v2.resource [req-98c7e546-5dfa-488e-bf99-ac8c6d11b45e 
None] create failed
  Traceback (most recent call last):
File /opt/stack/new/neutron/neutron/api/v2/resource.py, line 87, in 
resource
  result = method(request=request, **args)
File /opt/stack/new/neutron/neutron/api/v2/base.py, line 448, in create
  obj = obj_creator(request.context, **kwargs)
File /opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 660, in 
create_subnet
  result = super(Ml2Plugin, self).create_subnet(context, subnet)
File /opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 1054, 
in create_subnet
  self._validate_subnet_cidr(context, network, s['cidr'])
File /opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 572, in 
_validate_subnet_cidr
  if (netaddr.IPSet([subnet.cidr]) & new_subnet_ipset):
    File "/usr/lib/python2.7/dist-packages/netaddr/ip/sets.py", line 520, in __len__
      raise IndexError("range contains greater than %d (maxint) IP addresses! Use the .size property instead." % _sys_maxint)
  IndexError: range contains greater than 9223372036854775807 (maxint) IP addresses! Use the .size property instead.
   598 TRACE neutron.api.v2.resource
   598 INFO neutron.wsgi [req-98c7e546-5dfa-488e-bf99-ac8c6d11b45e None] 
127.0.0.1 - - [01/Sep/2014 12:56:01] POST /v2.0/subnets HTTP/1.1 500 378 
0.080822
   (598) accepted ('127.0.0.1', 59222)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364166/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362325] Re: Dashboard with slug sahara is not registered

2014-09-01 Thread Akihiro Motoki
** Project changed: horizon => devstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1362325

Title:
  Dashboard with slug sahara is not registered

Status in devstack - openstack dev environments:
  Fix Committed

Bug description:
  Saw this in a nova patch going through the check queue:

  http://logs.openstack.org/03/103703/5/check/check-tempest-dsvm-
  full/525cfaf/logs/horizon_error.txt.gz

  [Wed Aug 27 18:07:56.251879 2014] [:error] [pid 20373:tid 140464345409280] 
Internal Server Error: /project/
  [Wed Aug 27 18:07:56.251917 2014] [:error] [pid 20373:tid 140464345409280] 
Traceback (most recent call last):
  (mod_wsgi wrote each frame to the Apache error log with a prefix of the form
  [Wed Aug 27 18:07:56 2014] [:error] [pid 20373:tid 140464345409280];
  the prefixes are elided below for readability)
  File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 137, in get_response
    response = response.render()
  File "/usr/local/lib/python2.7/dist-packages/django/template/response.py", line 105, in render
    self.content = self.rendered_content
  File "/usr/local/lib/python2.7/dist-packages/django/template/response.py", line 82, in rendered_content
    content = template.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 140, in render
    return self._render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 134, in _render
    return self.nodelist.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 840, in render
    bit = self.render_node(node, context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 78, in render_node
    return node.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", line 123, in render
    return compiled_parent._render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 134, in _render
    return self.nodelist.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 840, in render
    bit = self.render_node(node, context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 78, in render_node
    return node.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", line 62, in render
    result = block.nodelist.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 840, in render
    bit = self.render_node(node, context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/debug.py", line 78, in render_node
    return node.render(context)
  File "/usr/local/lib/python2.7/dist-packages/django/template/loader_tags.py", line 62, in

[Yahoo-eng-team] [Bug 1364171] [NEW] Unexpected SystemExit exception causes the neutron functional job to fail

2014-09-01 Thread Maru Newby
Public bug reported:

The Neutron functional job has been failing periodically since tests for
the l3 agent merged on Aug 29:

https://review.openstack.org/#/c/109860/

The failures leave no useful test output and would appear to indicate
that SystemExit is being raised in the tests so that the test runner
exits prematurely:

http://logs.openstack.org/60/115360/4/check/check-neutron-dsvm-
functional/366b616/console.html


http://logstash.openstack.org/#eyJzZWFyY2giOiJcIkludm9jYXRpb25FcnJvcjogJy9vcHQvc3RhY2svbmV3L25ldXRyb24vLnRveC9kc3ZtLWZ1bmN0aW9uYWwvYmluL3B5dGhvblwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI4NjQwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDk2MTMzMDA3NTJ9
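
For context: SystemExit derives from BaseException, so a stray sys.exit() in
code under test can unwind past ordinary "except Exception" handlers and take
a whole test worker down without emitting any per-test output. A minimal
sketch of how a test can contain it instead; agent_main() here is an
illustrative stand-in, not actual neutron code:

import sys
import unittest


def agent_main(argv):
    # Illustrative stand-in for an entry point that handles a fatal
    # condition by calling sys.exit(), as an agent main() might.
    sys.exit(1)


class TestAgentExit(unittest.TestCase):
    def test_fatal_exit_is_contained(self):
        # assertRaises keeps the SystemExit inside this test instead of
        # letting it unwind the runner process.
        with self.assertRaises(SystemExit) as ctx:
            agent_main([])
        self.assertEqual(1, ctx.exception.code)


if __name__ == '__main__':
    unittest.main()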

** Affects: neutron
 Importance: Critical
 Status: New

** Changed in: neutron
   Importance: Undecided = Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364171

Title:
  Unexpected SystemExit exception causes the neutron functional job to
  fail

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Neutron functional job has been failing periodically since tests
  for the l3 agent merged on Aug 29:

  https://review.openstack.org/#/c/109860/

  The failures leave no useful test output and would appear to indicate
  that SystemExit is being raised in the tests so that the test runner
  exits prematurely:

  http://logs.openstack.org/60/115360/4/check/check-neutron-dsvm-
  functional/366b616/console.html

  
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcIkludm9jYXRpb25FcnJvcjogJy9vcHQvc3RhY2svbmV3L25ldXRyb24vLnRveC9kc3ZtLWZ1bmN0aW9uYWwvYmluL3B5dGhvblwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI4NjQwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDk2MTMzMDA3NTJ9

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364184] [NEW] functional job fails with 1 failure and no explanation

2014-09-01 Thread Miguel Angel Ajo
*** This bug is a duplicate of bug 1364171 ***
https://bugs.launchpad.net/bugs/1364171

Public bug reported:

http://logs.openstack.org/35/115935/7/check/check-neutron-dsvm-
functional/7dd676a/console.html#_2014-09-01_22_27_23_651

I have seen these failures appearing at random.

2014-09-01 22:27:08.361 | 2014-09-01 22:27:08.304 | dsvm-functional runtests: 
commands[0] | python -m neutron.openstack.common.lockutils python setup.py 
testr --slowest --testr-args=
2014-09-01 22:27:08.902 | 2014-09-01 22:27:08.882 | running testr
2014-09-01 22:27:23.600 | 2014-09-01 22:27:23.581 | running=OS_STDOUT_CAPTURE=1 
OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover 
-t ./ ${OS_TEST_PATH:-./neutron/tests/unit} --list 
2014-09-01 22:27:23.602 | 2014-09-01 22:27:23.582 | running=OS_STDOUT_CAPTURE=1 
OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover 
-t ./ ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpClMUnK
2014-09-01 22:27:23.639 | 2014-09-01 22:27:23.584 | running=OS_STDOUT_CAPTURE=1 
OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover 
-t ./ ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmp3KcTHu
2014-09-01 22:27:23.641 | 2014-09-01 22:27:23.586 | running=OS_STDOUT_CAPTURE=1 
OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover 
-t ./ ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpvgHvYc
2014-09-01 22:27:23.642 | 2014-09-01 22:27:23.588 | running=OS_STDOUT_CAPTURE=1 
OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover 
-t ./ ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpT86q67
2014-09-01 22:27:23.642 | 2014-09-01 22:27:23.589 | running=OS_STDOUT_CAPTURE=1 
OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover 
-t ./ ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpB8tN3N
2014-09-01 22:27:23.643 | 2014-09-01 22:27:23.591 | running=OS_STDOUT_CAPTURE=1 
OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover 
-t ./ ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpxbZvcn
2014-09-01 22:27:23.644 | 2014-09-01 22:27:23.593 | running=OS_STDOUT_CAPTURE=1 
OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover 
-t ./ ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmpLIEivx
2014-09-01 22:27:23.645 | 2014-09-01 22:27:23.594 | running=OS_STDOUT_CAPTURE=1 
OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 ${PYTHON:-python} -m subunit.run discover 
-t ./ ${OS_TEST_PATH:-./neutron/tests/unit}  --load-list /tmp/tmp_5TnKS
2014-09-01 22:27:23.646 | 2014-09-01 22:27:23.596 | 
==
2014-09-01 22:27:23.647 | 2014-09-01 22:27:23.597 | FAIL: process-returncode
2014-09-01 22:27:23.647 | 2014-09-01 22:27:23.599 | tags: worker-0
2014-09-01 22:27:23.648 | 2014-09-01 22:27:23.601 | 
--
2014-09-01 22:27:23.649 | 2014-09-01 22:27:23.603 | returncode 1
2014-09-01 22:27:23.650 | 2014-09-01 22:27:23.604 | Ran 17 tests in 13.259s
2014-09-01 22:27:23.651 | 2014-09-01 22:27:23.606 | FAILED (id=0, failures=1)
2014-09-01 22:27:23.651 | 2014-09-01 22:27:23.607 | error: testr failed (1)
2014-09-01 22:27:23.652 | 2014-09-01 22:27:23.619 | ERROR: InvocationError: 
'/opt/stack/new/neutron/.tox/dsvm-functional/bin/python -m 
neutron.openstack.common.lockutils python setup.py testr --slowest 
--testr-args='
2014-09-01 22:27:23.653 | 2014-09-01 22:27:23.621 | 
___ summary 
2014-09-01 22:27:23.654 | 2014-09-01 22:27:23.622 | ERROR:   dsvm-functional: 
commands failed
2014-09-01 22:27:23.678 | + RETVAL=1
2014-09-01 22:27:23.678 | + sudo mv 
/home/jenkins/workspace/check-neutron-dsvm-functional/devstack-gate-post-test-hook.txt
 /opt/stack/logs/
2014-09-01 22:27:23.679 | + set +o pipefail
2014-09-01 22:27:23.680 | + set +o xtrace
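
As background: testr records a synthetic test named "process-returncode" when
one of its worker processes exits non-zero without having reported a failure
on its subunit stream, which is why the run above shows one failure and no
failing test. A rough, hypothetical reproduction of that symptom (not the
neutron job itself):

import subprocess
import sys

# A worker that dies without writing any test result to stdout, mimicking
# what testr sees just before it records a process-returncode failure.
worker = subprocess.Popen(
    [sys.executable, '-c', 'import os; os._exit(1)'],
    stdout=subprocess.PIPE)
output, _ = worker.communicate()
print('captured subunit output: %r' % output)   # empty -> no per-test detail
print('worker returncode: %d' % worker.returncode)  # non-zero -> FAIL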

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364184

Title:
  functional job fails with 1 failure and no explanation

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  http://logs.openstack.org/35/115935/7/check/check-neutron-dsvm-
  functional/7dd676a/console.html#_2014-09-01_22_27_23_651

  I have seen these failures appearing at random.

  2014-09-01 22:27:08.361 | 2014-09-01 22:27:08.304 | dsvm-functional runtests: 
commands[0] | python -m neutron.openstack.common.lockutils python setup.py 
testr --slowest --testr-args=
  2014-09-01 22:27:08.902 | 2014-09-01 22:27:08.882 | running testr
  2014-09-01 22:27:23.600 | 2014-09-01 22:27:23.581 | 
running=OS_STDOUT_CAPTURE=1 OS_STDERR_CAPTURE=1 OS_LOG_CAPTURE=1 
${PYTHON:-python} -m subunit.run discover -t ./ 

[Yahoo-eng-team] [Bug 1353008] Re: MAAS Provider: LXC did not get DHCP address, stuck in pending

2014-09-01 Thread Ian Booth
I'm removing this from 1.20 series as any Juju related work (if any is
required) will be done for the next release.

** No longer affects: juju-core/1.20

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1353008

Title:
  MAAS Provider: LXC did not get DHCP address, stuck in pending

Status in Init scripts for use on cloud images:
  New
Status in juju-core:
  Triaged

Bug description:
  Note, that after I went onto the system, it *did* have an IP address.

    0/lxc/3:
  agent-state: pending
  instance-id: juju-machine-0-lxc-3
  series: trusty
  hardware: arch=amd64

  cloud-init-output.log snip:

  Cloud-init v. 0.7.5 running 'init' at Mon, 04 Aug 2014 23:57:12 +0000. Up 572.29 seconds.
  ci-info: +++Net device info+++
  ci-info: +--------+------+-----------+-----------+-------------------+
  ci-info: | Device |  Up  |  Address  |    Mask   |     Hw-Address    |
  ci-info: +--------+------+-----------+-----------+-------------------+
  ci-info: |   lo   | True | 127.0.0.1 | 255.0.0.0 |         .         |
  ci-info: |  eth0  | True |     .     |     .     | 00:16:3e:34:aa:57 |
  ci-info: +--------+------+-----------+-----------+-------------------+
  ci-info: !!!Route info failed!!!
  Cloud-init v. 0.7.5 running 'modules:config' at Mon, 04 Aug 2014 23:57:12 +0000. Up 572.99 seconds.
  Cloud-init v. 0.7.5 running 'modules:final' at Mon, 04 Aug 2014 23:57:14 +0000. Up 574.42 seconds.
  Cloud-init v. 0.7.5 finished at Mon, 04 Aug 2014 23:57:14 +0000. Datasource DataSourceNoCloudNet [seed=/var/lib/cloud/seed/nocloud-net][dsmode=net]. Up 574.54 seconds

  syslog on system, showing DHCPACK 1 second later:

  root@juju-machine-0-lxc-3:/home/ubuntu# grep DHCP /var/log/syslog
  Aug  4 23:57:13 juju-machine-0-lxc-3 dhclient: DHCPREQUEST of 10.96.3.173 on 
eth0 to 255.255.255.255 port 67 (xid=0x1687c544)
  Aug  4 23:57:13 juju-machine-0-lxc-3 dhclient: DHCPOFFER of 10.96.3.173 from 
10.96.0.10
  Aug  4 23:57:13 juju-machine-0-lxc-3 dhclient: DHCPACK of 10.96.3.173 from 
10.96.0.10
  Aug  5 05:28:15 juju-machine-0-lxc-3 dhclient: DHCPREQUEST of 10.96.3.173 on 
eth0 to 10.96.0.10 port 67 (xid=0x1687c544)
  Aug  5 05:28:15 juju-machine-0-lxc-3 dhclient: DHCPACK of 10.96.3.173 from 
10.96.0.10
  Aug  5 11:15:00 juju-machine-0-lxc-3 dhclient: DHCPREQUEST of 10.96.3.173 on 
eth0 to 10.96.0.10 port 67 (xid=0x1687c544)
  Aug  5 11:15:00 juju-machine-0-lxc-3 dhclient: DHCPACK of 10.96.3.173 from 
10.96.0.10

  It appears that in every case cloud-init's init-local stage has failed very
  early, as is visible in the juju logs
  (/var/lib/juju/containers/container/console.log):

  Traceback (most recent call last):
    File "/usr/bin/cloud-init", line 618, in <module>
      sys.exit(main())
    File "/usr/bin/cloud-init", line 614, in main
      get_uptime=True, func=functor, args=(name, args))
    File "/usr/lib/python2.7/dist-packages/cloudinit/util.py", line 1875, in log_time
      ret = func(*args, **kwargs)
    File "/usr/bin/cloud-init", line 491, in status_wrapper
      force=True)
    File "/usr/lib/python2.7/dist-packages/cloudinit/util.py", line 1402, in sym_link
      os.symlink(source, link)
  OSError: [Errno 2] No such file or directory
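
  For what it's worth, OSError [Errno 2] (ENOENT) from os.symlink() is the
  error raised when the directory that is supposed to contain the link does
  not exist yet. A minimal sketch of the failure mode and a guard; the helper
  name and paths are illustrative, not cloud-init's actual fix:

  import os


  def sym_link(source, link):
      # os.symlink() raises OSError [Errno 2] if the parent directory of
      # `link` is missing, matching the traceback above; create it first.
      parent = os.path.dirname(link)
      if parent and not os.path.isdir(parent):
          os.makedirs(parent)
      if os.path.islink(link):
          os.unlink(link)  # make the sketch re-runnable
      os.symlink(source, link)


  # /tmp paths chosen so the sketch runs unprivileged.
  sym_link('/tmp/demo-target', '/tmp/demo-links/status.json')
  print(os.readlink('/tmp/demo-links/status.json'))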

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1353008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364215] [NEW] L2 Agent switch to non-dvr mode on first RPC failure

2014-09-01 Thread Vivekanandan Narasimhan
Public bug reported:

The DVR-enabled L2 OVS Agent switches to operating in non-dvr mode if the
first RPC call, get_dvr_mac_address_by_host(), fails during its __init__().
After that the L2 Agent is stuck operating in non-dvr mode, which removes
the ability to run DVR on such nodes.

The fix for this bug is to make the DVR RPC calls to the controller
on-demand, only when the L2 OVS Agent detects that the first local port on a
DVR-routed subnet has been plumbed on the node. A rough sketch of that
pattern follows.
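
A rough sketch of the on-demand pattern described above, with illustrative
names (not the actual neutron agent code): the agent leaves its DVR MAC
unresolved at __init__() and retries the RPC whenever a DVR-routed port is
plumbed, so one early failure no longer locks it into non-dvr mode.

class FakePluginRpc(object):
    # Illustrative RPC stub: fails a given number of times, then succeeds.
    def __init__(self, failures):
        self.failures = failures

    def get_dvr_mac_address_by_host(self, host):
        if self.failures > 0:
            self.failures -= 1
            raise RuntimeError('RPC timeout')
        return 'fa:16:3f:00:00:01'


class DvrAwareAgent(object):
    def __init__(self, plugin_rpc, host):
        self.plugin_rpc = plugin_rpc
        self.host = host
        self.dvr_mac = None  # unresolved, not "non-dvr forever"

    def _ensure_dvr_mac(self):
        # Lazy, retried lookup: a failure leaves dvr_mac unset, so the
        # next DVR port event tries again instead of giving up.
        if self.dvr_mac is None:
            try:
                self.dvr_mac = self.plugin_rpc.get_dvr_mac_address_by_host(
                    self.host)
            except Exception:
                pass
        return self.dvr_mac

    def plumb_port(self, port_id, dvr_routed):
        if dvr_routed and self._ensure_dvr_mac():
            return 'dvr flows for %s via %s' % (port_id, self.dvr_mac)
        return 'legacy flows for %s' % port_id


agent = DvrAwareAgent(FakePluginRpc(failures=1), 'compute-1')
print(agent.plumb_port('port-a', dvr_routed=True))  # falls back once
print(agent.plumb_port('port-b', dvr_routed=True))  # retries, now uses DVR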

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1364215

Title:
  L2 Agent switch to non-dvr mode on first RPC failure

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The DVR-enabled L2 OVS Agent switches to operating in non-dvr mode if
  the first RPC call, get_dvr_mac_address_by_host(), fails during its
  __init__(). After that the L2 Agent is stuck operating in non-dvr mode,
  which removes the ability to run DVR on such nodes.

  The fix for this bug is to make the DVR RPC calls to the controller
  on-demand, only when the L2 OVS Agent detects that the first local port
  on a DVR-routed subnet has been plumbed on the node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1364215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp