[Yahoo-eng-team] [Bug 1490403] [NEW] Gate failing on test_routerrule_detail

2015-08-30 Thread Frode Nordahl
Public bug reported:

The gate/Jenkins checks are currently bombing out on this error:

ERROR: test_routerrule_detail 
(openstack_dashboard.dashboards.project.routers.tests.RouterRuleTests)
--
Traceback (most recent call last):
  File "/home/ubuntu/horizon/openstack_dashboard/test/helpers.py", line 111, in 
instance_stub_out
return fn(self, *args, **kwargs)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/tests.py", 
line 711, in test_routerrule_detail
res = self._get_detail(router)
  File "/home/ubuntu/horizon/openstack_dashboard/test/helpers.py", line 111, in 
instance_stub_out
return fn(self, *args, **kwargs)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/tests.py", 
line 49, in _get_detail
args=[router.id]))
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 470, in get
**extra)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 286, in get
return self.generic('GET', path, secure=secure, **r)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 358, in generic
return self.request(**r)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 440, in request
six.reraise(*exc_info)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/home/ubuntu/horizon/horizon/decorators.py", line 36, in dec
return view_func(request, *args, **kwargs)
  File "/home/ubuntu/horizon/horizon/decorators.py", line 52, in dec
return view_func(request, *args, **kwargs)
  File "/home/ubuntu/horizon/horizon/decorators.py", line 36, in dec
return view_func(request, *args, **kwargs)
  File "/home/ubuntu/horizon/horizon/decorators.py", line 84, in dec
return view_func(request, *args, **kwargs)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 69, in view
return self.dispatch(request, *args, **kwargs)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 87, in dispatch
return handler(request, *args, **kwargs)
  File "/home/ubuntu/horizon/horizon/tabs/views.py", line 146, in get
context = self.get_context_data(**kwargs)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/views.py", 
line 140, in get_context_data
context = super(DetailView, self).get_context_data(**kwargs)
  File "/home/ubuntu/horizon/horizon/tables/views.py", line 107, in 
get_context_data
context = super(MultiTableMixin, self).get_context_data(**kwargs)
  File "/home/ubuntu/horizon/horizon/tabs/views.py", line 56, in 
get_context_data
exceptions.handle(self.request)
  File "/home/ubuntu/horizon/horizon/exceptions.py", line 359, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File "/home/ubuntu/horizon/horizon/tabs/views.py", line 54, in 
get_context_data
context["tab_group"].load_tab_data()
  File "/home/ubuntu/horizon/horizon/tabs/base.py", line 128, in load_tab_data
exceptions.handle(self.request)
  File "/home/ubuntu/horizon/horizon/exceptions.py", line 359, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File "/home/ubuntu/horizon/horizon/tabs/base.py", line 125, in load_tab_data
tab._data = tab.get_context_data(self.request)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 82, in get_context_data
data["rulesmatrix"] = self.get_routerrulesgrid_data(rules)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 127, in get_routerrulesgrid_data
source, target, rules))
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 159, in _get_subnet_connectivity
if (int(dst.network) >= int(rd.broadcast) or
TypeError: int() argument must be a string or a number, not 'NoneType'
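The crash is `int()` being handed `None` in `_get_subnet_connectivity`, which matches netaddr 0.7.16 reportedly starting to return None for `.broadcast` on /31 and /32 networks (see bug 1490380 later in this digest). A defensive sketch of the bounds computation — the helper name is hypothetical, and the stdlib `ipaddress` module stands in for netaddr so the sketch is self-contained:

```python
import ipaddress

def subnet_int_bounds(cidr):
    """Return the (first, last) addresses of a subnet as integers.

    Hypothetical guard for the TypeError above: int(rd.broadcast) fails
    when broadcast is None, which netaddr 0.7.16 reportedly returns for
    /31 and /32 networks.
    """
    net = ipaddress.ip_network(cidr)
    # Simulate netaddr's behaviour: no distinct broadcast for /31 and /32.
    broadcast = net.broadcast_address if net.num_addresses > 2 else None
    # Guard before int(): fall back to the numerically last address.
    last = broadcast if broadcast is not None else net[net.num_addresses - 1]
    return int(net.network_address), int(last)
```

With such a guard, the comparison in `_get_subnet_connectivity` would work for any prefix length instead of raising on /31 and /32.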

** Affects: horizon
 Importance: Undecided
 Status: New

** Package changed: horizon (Ubuntu) => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1490403

Title:
  Gate failing on test_routerrule_detail

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The gate/Jenkins checks are currently bombing out on this error:

  ERROR: test_routerrule_detail 
(openstack_dashboard.dashboards.project.routers.tests.RouterRuleTests)
  --
  Traceback (most recent call last):


[Yahoo-eng-team] [Bug 1490389] [NEW] cannot receive extra args for ostestr via tox

2015-08-30 Thread yong sheng gong
Public bug reported:

ostestr (http://docs.openstack.org/developer/os-testr/ostestr.html) has many more
arguments for running test cases, but our tox.ini
limits the usage to just "--regex".

For example, --serial to run test cases serially.
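A minimal tox.ini sketch of the pass-through fix (the exact current and proposed contents of tox.ini are assumptions; `{posargs}` is tox's standard mechanism for forwarding extra command-line arguments):

```ini
[testenv]
# Before (assumed): only a regex could be forwarded.
# commands = ostestr --regex '{posargs}'
# After: forward any ostestr arguments.
commands = ostestr {posargs}
```

Invoked as, e.g., `tox -e py27 -- --serial --regex test_foo`, everything after `--` reaches ostestr unchanged.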

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

** Description changed:

  ostestr http://docs.openstack.org/developer/os-testr/ostestr.html has many 
more arguments to run test cases. but out tox.ini
  is limits the usage to just "--regex".
+ 
+ Such as --serial to run cases in serally etc.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490389

Title:
  cannot receive extra args for ostestr via tox

Status in neutron:
  In Progress

Bug description:
  ostestr http://docs.openstack.org/developer/os-testr/ostestr.html has many 
more arguments to run test cases. but out tox.ini
  is limits the usage to just "--regex".

  Such as --serial to run cases in serally etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487372] Re: unretrieve project list when switching project in keystone v3

2015-08-30 Thread Canh Truong
** Project changed: horizon => django-openstack-auth

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1487372

Title:
  unretrieve project list when switching project in keystone v3

Status in django-openstack-auth:
  New

Bug description:
  When switching projects, the dashboard shows the error " ", but we can
  still list projects from the command line. If we log out and log in
  again, the dashboard works normally.

  Traceback (most recent call last):
File 
"/opt/stack/horizon/openstack_dashboard/dashboards/identity/projects/views.py", 
line 89, in get_data
  marker=marker)
File "/opt/stack/horizon/openstack_dashboard/api/keystone.py", line 290, in 
tenant_list
  tenants = manager.list(**kwargs)
File 
"/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/utils.py",
 line 336, in inner
  return func(*args, **kwargs)
File 
"/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/v3/projects.py",
 line 106, in list
  **kwargs)
File 
"/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/base.py",
 line 73, in func
  return f(*args, **new_kwargs)
File 
"/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/base.py",
 line 366, in list
  self.collection_key)
File 
"/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/base.py",
 line 113, in _list
  resp, body = self.client.get(url, **kwargs)
File 
"/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/adapter.py",
 line 170, in get
  return self.request(url, 'GET', **kwargs)
File 
"/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/adapter.py",
 line 206, in request
  resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
File 
"/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/adapter.py",
 line 95, in request
  return self.session.request(url, method, **kwargs)
File 
"/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/utils.py",
 line 336, in inner
  return func(*args, **kwargs)
File 
"/opt/stack/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/session.py",
 line 397, in request
  raise exceptions.from_response(resp, method, url)
  Unauthorized: The request you have made requires authentication. (Disable 
debug mode to suppress these details.) (HTTP 401) (Request-ID: 
req-70ac41d3-7b89-4017-8617-447358e6dc91)

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1487372/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490380] [NEW] netaddr 0.7.16 causes UT havoc

2015-08-30 Thread Armando Migliaccio
Public bug reported:

An example:

http://logs.openstack.org/03/216603/4/check/gate-neutron-
python27/21af647/testr_results.html.gz

** Affects: neutron
 Importance: Critical
 Assignee: Kevin Benton (kevinbenton)
 Status: Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490380

Title:
  netaddr 0.7.16 causes UT havoc

Status in neutron:
  Confirmed

Bug description:
  An example:

  http://logs.openstack.org/03/216603/4/check/gate-neutron-
  python27/21af647/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490380/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490375] [NEW] Volume Type Extra Specs cancel action redirect wrong url

2015-08-30 Thread qiaomin032
Public bug reported:

In the Volume Type Extra Specs page, the cancel action in the create and edit
Volume Type Extra Spec modal dialogs redirects to the wrong URL; right-clicking
the cancel button raises an error.
The QoS Spec page has the same error.
Please see the attachment for more detail.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "extra_cancel_error.jpg"
   
https://bugs.launchpad.net/bugs/1490375/+attachment/4454773/+files/extra_cancel_error.jpg

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1490375

Title:
  Volume Type Extra Specs cancel action redirect wrong url

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the Volume Type Extra Specs page, the cancel action in the create and
  edit Volume Type Extra Spec modal dialogs redirects to the wrong URL;
  right-clicking the cancel button raises an error.
  The QoS Spec page has the same error.
  Please see the attachment for more detail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1490375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490368] [NEW] test_list_virtual_interfaces fails due to invalid mac address

2015-08-30 Thread Ken'ichi Ohmichi
Public bug reported:

The test failed on the gate, for example:

http://logs.openstack.org/56/217456/2/check/gate-tempest-dsvm-
full/ba8c5ef/logs/testr_results.html.gz

Traceback (most recent call last):
  File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/decorators.py",
 line 40, in wrapper
return f(self, *func_args, **func_kwargs)
  File "tempest/test.py", line 126, in wrapper
return f(self, *func_args, **func_kwargs)
  File "tempest/api/compute/servers/test_virtual_interfaces.py", line 60, in 
test_list_virtual_interfaces
"Invalid mac address detected.")
  File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/unittest2/case.py",
 line 702, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true : Invalid mac address detected.
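The assertion checks that each returned virtual interface carries a well-formed MAC address. A self-contained sketch of such a check — the exact regex Tempest uses is an assumption:

```python
import re

# Colon-separated, six-octet hex MAC, e.g. "fa:16:3e:9c:52:1f".
_MAC_RE = re.compile(r'^(?:[0-9a-f]{2}:){5}[0-9a-f]{2}$', re.IGNORECASE)

def is_valid_mac(mac):
    """Return True when mac looks like a valid colon-separated MAC string."""
    return bool(_MAC_RE.match(mac or ''))
```

A failure like the one above means the API returned a MAC that does not match this shape (or is empty), not that the test harness itself broke.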

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1490368

Title:
  test_list_virtual_interfaces fails due to invalid mac address

Status in OpenStack Compute (nova):
  New
Status in tempest:
  New

Bug description:
  The test failed on the gate, for example:

  http://logs.openstack.org/56/217456/2/check/gate-tempest-dsvm-
  full/ba8c5ef/logs/testr_results.html.gz

  Traceback (most recent call last):
File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/decorators.py",
 line 40, in wrapper
  return f(self, *func_args, **func_kwargs)
File "tempest/test.py", line 126, in wrapper
  return f(self, *func_args, **func_kwargs)
File "tempest/api/compute/servers/test_virtual_interfaces.py", line 60, in 
test_list_virtual_interfaces
  "Invalid mac address detected.")
File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/unittest2/case.py",
 line 702, in assertTrue
  raise self.failureException(msg)
  AssertionError: False is not true : Invalid mac address detected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1490368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490366] [NEW] "_associate_qos_spec.html" url is wrong

2015-08-30 Thread qiaomin032
Public bug reported:

Steps to reproduce the bug:
1. Open the admin volume_type panel and create a volume type;
2. Right-click the "Manage QoS Spec Association" button in the table actions and
open it in a new page.
3. A TemplateDoesNotExist error is raised.

Because the "_associate_qos_spec.html" URL in "associate_qos_spec.html"
is wrong, a TemplateDoesNotExist error is raised.

Please see the attachment for more detail.

** Affects: horizon
 Importance: Undecided
 Assignee: qiaomin032 (chen-qiaomin)
 Status: In Progress

** Attachment added: "qos_template_error.jpg"
   
https://bugs.launchpad.net/bugs/1490366/+attachment/4454754/+files/qos_template_error.jpg

** Summary changed:

- "_associate_qos_spec.html" url is wrong  
+ "_associate_qos_spec.html" url is wrong

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1490366

Title:
  "_associate_qos_spec.html" url is wrong

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Steps to reproduce the bug:
  1. Open the admin volume_type panel and create a volume type;
  2. Right-click the "Manage QoS Spec Association" button in the table
  actions and open it in a new page.
  3. A TemplateDoesNotExist error is raised.

  Because the "_associate_qos_spec.html" URL in
  "associate_qos_spec.html" is wrong, a TemplateDoesNotExist
  error is raised.

  Please see the attachment for more detail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1490366/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490354] [NEW] Tox exhausting /tmp partition

2015-08-30 Thread Jamie Lennox
Public bug reported:

Every time I tried to run tox I was getting the error message that
/tmp was out of space when trying to create the virtual env, which means
that creating the virtualenv was filling a 3 GB partition.

In our tox requirements we currently have:

deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
       .[ldap]
       .[memcache]
       .[mongodb]

By having each of those additional dependencies listed as .[XXX], tox
installs . (the keystone working directory) into a virtualenv and then
determines the entry points. This creates 3 separate /tmp/pip-XXX-build
directories. The other side of this is that my keystone/.testrepository
folder is now 994M, so when tox copied it to tmp 3 times and then tried
to install all the dependencies there wasn't much space left.

There are two fixes to this:
1. reset my .testrepository database
2. list the dependencies together, like .[XXX,YYY,ZZZ], so . is only
copied once

In addition, because we are installing dependencies with .[XXX],
which means install . (the working directory/keystone) plus the XXX
dependencies, there is no reason to specifically install -r
requirements.txt, as this will be handled for us.
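Fix 2 could look like this sketch of the deps section (combining the extras so . is built only once; per the last paragraph, -r requirements.txt is dropped because installing . pulls it in):

```ini
deps = -r{toxinidir}/test-requirements.txt
       .[ldap,memcache,mongodb]
```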

** Affects: keystone
 Importance: Undecided
 Assignee: Jamie Lennox (jamielennox)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1490354

Title:
  Tox exhausting /tmp partition

Status in Keystone:
  In Progress

Bug description:
  Every time I tried to run tox I was getting the error message that
  /tmp was out of space when trying to create the virtual env, which
  means that creating the virtualenv was filling a 3 GB partition.

  In our tox requirements we currently have:

  deps = -r{toxinidir}/requirements.txt
         -r{toxinidir}/test-requirements.txt
         .[ldap]
         .[memcache]
         .[mongodb]

  By having each of those additional dependencies listed as .[XXX], tox
  installs . (the keystone working directory) into a virtualenv and
  then determines the entry points. This creates 3 separate
  /tmp/pip-XXX-build directories. The other side of this is that my
  keystone/.testrepository folder is now 994M, so when tox copied it to
  tmp 3 times and then tried to install all the dependencies there
  wasn't much space left.

  There are two fixes to this:
  1. reset my .testrepository database
  2. list the dependencies together, like .[XXX,YYY,ZZZ], so . is only
  copied once

  In addition, because we are installing dependencies with .[XXX],
  which means install . (the working directory/keystone) plus the XXX
  dependencies, there is no reason to specifically install -r
  requirements.txt, as this will be handled for us.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1490354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480778] Re: tox -e py27 raises Error in installdeps

2015-08-30 Thread Masaki Matsushita
I could successfully run tests with run_tests.sh.
It's my mistake, not glance's problem.

** Changed in: glance
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1480778

Title:
  tox -e py27 raises Error in installdeps

Status in Glance:
  Invalid

Bug description:
  tox -e py27 raises SyntaxError: don't know how to evaluate 'num' '2.6' in 
installdeps.
  I think we should update test-requirements.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1480778/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490308] [NEW] In DHCP agent's sync_state, get_active_networks_info RPC times out, when there are large number of networks.

2015-08-30 Thread Sudhakar Gariganti
Public bug reported:

In our scale tests, for the scenario of supporting a large number of
networks, we encountered frequent RPC timeouts for the
get_active_networks_info call in the sync_state method.

Once this timeout happens, it takes an indefinite amount of time for the
DHCP agent to recover as it keeps doing a lot of redundant work.

Assume I am provisioning the 600th tenant network and fail to enable
DHCP for that network, so a resync is scheduled for this network
alone.

Now in the sync_state method, we fire the get_active_networks_info call,
which doesn't take any 'filters'. The Neutron server takes a long time
to return because it has to:

1. fetch all networks from the DB which are hosted on this agent and try to
schedule them,
2. fetch subnet info for all networks,
3. fetch port info for all networks.

By the time the response comes, the agent has already hit the default
60 sec timeout.

Though step 1 makes sense for some cases, we don't need to get
subnet and port info for all the networks when we actually want to
resync only 1 network.

I think we need to resurrect the get_active_networks RPC and add
filtering to the get_active_networks_info RPC.

P.S.: Increasing the rpc_timeout is definitely an option, but given the
possible room for improvement in the agent code, I do not want to call
that shot already.
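The proposed filtering can be sketched as follows. This is not the actual Neutron RPC code: the function signature, the `network_ids` parameter, and the payload shapes are all assumptions used to illustrate why restricting the response to the networks being resynced avoids the bulk fetch described above.

```python
def get_active_networks_info(networks, subnets, ports, network_ids=None):
    """Return per-network payloads, optionally restricted to network_ids.

    networks: list of dicts with an 'id' key; subnets and ports: lists of
    dicts with a 'network_id' key. Passing network_ids lets an agent that
    is resyncing a single network avoid fetching subnet/port info for
    every network it hosts.
    """
    if network_ids is not None:
        wanted = set(network_ids)
        networks = [n for n in networks if n['id'] in wanted]
    return [
        {'network': n,
         'subnets': [s for s in subnets if s['network_id'] == n['id']],
         'ports': [p for p in ports if p['network_id'] == n['id']]}
        for n in networks
    ]
```

With `network_ids=['<one-net>']`, the server builds one payload instead of hundreds, keeping the response well inside the 60 sec RPC timeout.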

** Affects: neutron
 Importance: Undecided
 Assignee: Sudhakar Gariganti (sudhakar-gariganti)
 Status: New


** Tags: l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => Sudhakar Gariganti (sudhakar-gariganti)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490308

Title:
  In DHCP agent's sync_state, get_active_networks_info RPC times out,
  when there are large number of networks.

Status in neutron:
  New

Bug description:
  In our scale tests, for the scenario of supporting a large number of
  networks, we encountered frequent RPC timeouts for the
  get_active_networks_info call in the sync_state method.

  Once this timeout happens, it takes an indefinite amount of time for
  the DHCP agent to recover as it keeps doing a lot of redundant work.

  Assume I am provisioning the 600th tenant network and fail to enable
  DHCP for that network, so a resync is scheduled for this network
  alone.

  Now in the sync_state method, we fire the get_active_networks_info
  call, which doesn't take any 'filters'. The Neutron server takes a
  long time to return because it has to:

  1. fetch all networks from the DB which are hosted on this agent and try to
  schedule them,
  2. fetch subnet info for all networks,
  3. fetch port info for all networks.

  By the time the response comes, the agent has already hit the
  default 60 sec timeout.

  Though step 1 makes sense for some cases, we don't need to get
  subnet and port info for all the networks when we actually want to
  resync only 1 network.

  I think we need to resurrect the get_active_networks RPC and add
  filtering to the get_active_networks_info RPC.

  P.S.: Increasing the rpc_timeout is definitely an option, but given the
  possible room for improvement in the agent code, I do not want to call
  that shot already.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490308/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490306] [NEW] We can not pass the headers in the request PUT in functional tests

2015-08-30 Thread dshakhray
Public bug reported:

Working with blobs requires passing the header "Content-Type:
application/octet-stream". The "_check_artifact_put" method used to send
PUT requests does not support headers, which makes it impossible to test
blobs.
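A minimal sketch of how the helper could accept caller-supplied headers. The function name and default header here are assumptions, not the actual Glance test code; the point is merging caller headers over the helper's defaults:

```python
def build_put_headers(extra=None):
    """Merge caller-supplied headers over the helper's defaults, so a
    test can send e.g. Content-Type: application/octet-stream for blobs."""
    headers = {'Content-Type': 'application/json'}
    headers.update(extra or {})
    return headers
```

A blob test would then call the PUT helper with `extra={'Content-Type': 'application/octet-stream'}` while existing callers keep the default behaviour.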

** Affects: glance
 Importance: Undecided
 Assignee: dshakhray (dshakhray)
 Status: New


** Tags: artifacts

** Changed in: glance
 Assignee: (unassigned) => dshakhray (dshakhray)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1490306

Title:
  We can not pass the headers in the request  PUT in functional tests

Status in Glance:
  New

Bug description:
  Working with blobs requires passing the header "Content-Type:
  application/octet-stream". The "_check_artifact_put" method used to
  send PUT requests does not support headers, which makes it impossible
  to test blobs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1490306/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489921] Re: Nova connects to rabbitmq successfully but has invalid credentials

2015-08-30 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489921

Title:
  Nova connects to rabbitmq successfully but has invalid credentials

Status in OpenStack Compute (nova):
  New
Status in oslo.messaging:
  New

Bug description:
  From rabbitmq log:

  =INFO REPORT 28-Aug-2015::10:54:20 ===
  accepting AMQP connection <0.15664.0> (10.0.2.26:55772 -> 10.0.2.8:5672)

  =INFO REPORT 28-Aug-2015::10:54:20 ===
  Mirrored queue 
'q-agent-notifier-security_group-update_fanout_c8d714e02b944c7f91dad2530a34ff01'
 in vhost '/': Adding mirror on node 'rabbit@os-controller-1003': <7448.19519.0>

  =INFO REPORT 28-Aug-2015::10:54:20 ===
  Mirrored queue 
'q-agent-notifier-dvr-update_fanout_87fb0fc8e8224ffb88ea91ee20ad8e29' in vhost 
'/': Adding mirror on node 'rabbit@os-controller-1002': <7447.3416.0>

  =INFO REPORT 28-Aug-2015::10:54:20 ===
  Mirrored queue 
'q-agent-notifier-dvr-update_fanout_87fb0fc8e8224ffb88ea91ee20ad8e29' in vhost 
'/': Adding mirror on node 'rabbit@os-controller-1003': <7448.19521.0>

  =ERROR REPORT 28-Aug-2015::10:54:20 ===
  closing AMQP connection <0.15305.0> (10.0.2.26:55758 -> 10.0.2.8:5672):
  {handshake_error,starting,0,
   {amqp_error,access_refused,
    "AMQPLAIN login refused: user 'openstack' - invalid credentials",
    'connection.start_ok'}}

  =INFO REPORT 28-Aug-2015::10:54:21 ===
  accepting AMQP connection <0.15747.0> (10.0.2.26:55773 -> 10.0.2.8:5672)

  From Nova Log:

  2015-08-28 10:54:19.524 14743 DEBUG oslo_concurrency.lockutils 
[req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Lock "compute_resources" 
acquired by "_update_available_resource" :: waited 0.000s inner 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:444
  2015-08-28 10:54:19.827 14743 INFO nova.compute.resource_tracker 
[req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Total usable vcpus: 48, 
total allocated vcpus: 1
  2015-08-28 10:54:19.827 14743 INFO nova.compute.resource_tracker 
[req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Final resource view: 
name=osc-1001.prd.cin1.corp.hosting.net phys_ram=257524MB used_ram=2560MB 
phys_disk=5GB used_disk=20GB total_vcpus=48 used_vcpus=1 
pci_stats=
  2015-08-28 10:54:19.886 14743 INFO nova.scheduler.client.report 
[req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Compute_service record 
updated for ('osc-1001.prd.cin1.corp.hosting.net', 
'osc-1001.prd.cin1.corp.hosting.net')
  2015-08-28 10:54:19.886 14743 INFO nova.compute.resource_tracker 
[req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Compute_service record 
updated for 
osc-1001.prd.cin1.corp.hosting.net:osc-1001.prd.cin1.corp.hosting.net
  2015-08-28 10:54:19.887 14743 DEBUG oslo_concurrency.lockutils 
[req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Lock "compute_resources" 
released by "_update_available_resource" :: held 0.363s inner 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:456
  2015-08-28 10:54:19.922 14743 DEBUG nova.service 
[req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Creating RPC server for 
service compute start /usr/lib/python2.7/site-packages/nova/service.py:188
  2015-08-28 10:54:19.925 14743 INFO oslo_messaging._drivers.impl_rabbit 
[req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Connecting to AMQP server 
on 10.0.2.8:5672
  2015-08-28 10:54:19.943 14743 INFO oslo_messaging._drivers.impl_rabbit 
[req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Connected to AMQP server 
on 10.0.2.8:5672
  2015-08-28 10:54:19.969 14743 DEBUG nova.service 
[req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] Join ServiceGroup 
membership for this service compute start 
/usr/lib/python2.7/site-packages/nova/service.py:206
  2015-08-28 10:54:19.969 14743 DEBUG nova.servicegroup.drivers.db 
[req-91919deb-c6be-42a8-91f6-557f078f7d19 - - - - -] DB_Driver: join new 
ServiceGroup member osc-1001.prd.cin1.corp.hosting.net to the compute group, 
service =  join 
/usr/lib/python2.7/site-packages/nova/servicegroup/drivers/db.py:59

  Rabbit configuration in nova.conf
  [oslo_messaging_rabbit]
  rabbit_hosts=10.0.2.8:5672,10.0.2.7:5672,10.0.2.6:5672
  rabbit_userid=openstack
  rabbit_password=placeholderpassword

  Functionality seems fine and nothing shows up in the nova log, but
  rabbitmq reports this error for this hypervisor, and I get the message
  with only the nova openstack service running.
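  One thing worth noting when reading the logs above: with a
  comma-separated rabbit_hosts list, oslo.messaging's rabbit driver
  tries the hosts in turn, so a single node with stale credentials can
  produce intermittent access_refused errors while connections to the
  other nodes succeed. A sketch of how such a list splits into
  (host, port) pairs (illustrative only, not the driver's actual code):

```python
def parse_rabbit_hosts(value, default_port=5672):
    """Split a rabbit_hosts option into a list of (host, port) pairs."""
    pairs = []
    for entry in value.split(','):
        host, _, port = entry.strip().partition(':')
        pairs.append((host, int(port) if port else default_port))
    return pairs


# The configuration from this report yields three candidate brokers;
# each should be checked for the 'openstack' user's credentials.
hosts = parse_rabbit_hosts('10.0.2.8:5672,10.0.2.7:5672,10.0.2.6:5672')
```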

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489334] Re: LBaaS v2- SNI_container_refs attr. should be added to plurals dict manually

2015-08-30 Thread Evgeny Fedoruk
This issue was fixed in https://review.openstack.org/#/c/217296/

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489334

Title:
  LBaaS v2- SNI_container_refs  attr. should be added to plurals dict
  manually

Status in neutron:
  Invalid

Bug description:
  In the LBaaS v2 extension, the get_resources function manages the
  plural-to-singular resource map. Sub-resources are not handled by the
  resource_helper.build_plural_mappings function and therefore must be
  added manually as exceptions.

  The sni_container_refs attribute, newly added in 
https://review.openstack.org/#/c/216465, should be added this way.
  A plugin unit test covering it should be added in 
neutron_lbaas/tests/unit/services/loadbalancer/test_loadbalancer_plugin.py, 
next to the test_listener_create test function, which tests it with an empty 
SNI list.
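  For context, a minimal sketch of why the manual exception is needed:
  a build_plural_mappings-style helper typically derives the singular by
  stripping a trailing 's', which mangles 'sni_container_refs', so the
  correct singular is passed in as a special mapping. This is an
  illustration of the pattern, not neutron's actual implementation:

```python
def build_plural_mappings(special_mappings, resource_map):
    """Map each plural resource name to its singular form.

    Plurals not listed in special_mappings fall back to the naive
    rule of dropping the trailing 's'.
    """
    mappings = dict(special_mappings)
    for plural in resource_map:
        mappings.setdefault(plural, plural[:-1])
    return mappings


# The SNI attribute is registered explicitly, since the naive rule
# would produce the wrong singular ('sni_container_ref' vs '..._refs').
plural_mappings = build_plural_mappings(
    {'sni_container_refs': 'sni_container_ref'},
    {'listeners': {}, 'pools': {}})
```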

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489334/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490257] [NEW] VMware: nova ignores cinder volume mappings

2015-08-30 Thread Gary Kotton
Public bug reported:

When booting from a volume, the Nova driver ignores the cinder
'storage_policy' and moves the volume to the datastore that nova
selected. Nova should use the cinder datastore.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1490257

Title:
  VMware: nova ignores cinder volume mappings

Status in OpenStack Compute (nova):
  New

Bug description:
  When booting from a volume, the Nova driver ignores the cinder
  'storage_policy' and moves the volume to the datastore that nova
  selected. Nova should use the cinder datastore.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1490257/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp