[Yahoo-eng-team] [Bug 1490403] [NEW] Gate failing on test_routerrule_detail

2015-08-31 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The gate/jenkins checks are currently bombing out on this error:

ERROR: test_routerrule_detail 
(openstack_dashboard.dashboards.project.routers.tests.RouterRuleTests)
--
Traceback (most recent call last):
  File "/home/ubuntu/horizon/openstack_dashboard/test/helpers.py", line 111, in 
instance_stub_out
return fn(self, *args, **kwargs)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/tests.py", 
line 711, in test_routerrule_detail
res = self._get_detail(router)
  File "/home/ubuntu/horizon/openstack_dashboard/test/helpers.py", line 111, in 
instance_stub_out
return fn(self, *args, **kwargs)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/tests.py", 
line 49, in _get_detail
args=[router.id]))
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 470, in get
**extra)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 286, in get
return self.generic('GET', path, secure=secure, **r)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 358, in generic
return self.request(**r)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 440, in request
six.reraise(*exc_info)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/home/ubuntu/horizon/horizon/decorators.py", line 36, in dec
return view_func(request, *args, **kwargs)
  File "/home/ubuntu/horizon/horizon/decorators.py", line 52, in dec
return view_func(request, *args, **kwargs)
  File "/home/ubuntu/horizon/horizon/decorators.py", line 36, in dec
return view_func(request, *args, **kwargs)
  File "/home/ubuntu/horizon/horizon/decorators.py", line 84, in dec
return view_func(request, *args, **kwargs)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 69, in view
return self.dispatch(request, *args, **kwargs)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 87, in dispatch
return handler(request, *args, **kwargs)
  File "/home/ubuntu/horizon/horizon/tabs/views.py", line 146, in get
context = self.get_context_data(**kwargs)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/views.py", 
line 140, in get_context_data
context = super(DetailView, self).get_context_data(**kwargs)
  File "/home/ubuntu/horizon/horizon/tables/views.py", line 107, in 
get_context_data
context = super(MultiTableMixin, self).get_context_data(**kwargs)
  File "/home/ubuntu/horizon/horizon/tabs/views.py", line 56, in 
get_context_data
exceptions.handle(self.request)
  File "/home/ubuntu/horizon/horizon/exceptions.py", line 359, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File "/home/ubuntu/horizon/horizon/tabs/views.py", line 54, in 
get_context_data
context["tab_group"].load_tab_data()
  File "/home/ubuntu/horizon/horizon/tabs/base.py", line 128, in load_tab_data
exceptions.handle(self.request)
  File "/home/ubuntu/horizon/horizon/exceptions.py", line 359, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File "/home/ubuntu/horizon/horizon/tabs/base.py", line 125, in load_tab_data
tab._data = tab.get_context_data(self.request)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 82, in get_context_data
data["rulesmatrix"] = self.get_routerrulesgrid_data(rules)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 127, in get_routerrulesgrid_data
source, target, rules))
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 159, in _get_subnet_connectivity
if (int(dst.network) >= int(rd.broadcast) or
TypeError: int() argument must be a string or a number, not 'NoneType'
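
For illustration, the crash is just int() being handed None. A minimal sketch
of a defensive version of the comparison (not the Horizon fix; it assumes
netaddr-style IPNetwork objects, whose .broadcast attribute can come back as
None for /31 and /32 networks in some netaddr releases):

    # Illustrative guard only: fall back to the last address of the network
    # when netaddr reports no broadcast address, so int() never sees None.
    import netaddr

    def subnets_overlap(rule_cidr, subnet_cidr):
        rd = netaddr.IPNetwork(rule_cidr)
        dst = netaddr.IPNetwork(subnet_cidr)
        rd_end = rd.broadcast if rd.broadcast is not None else rd[-1]
        dst_end = dst.broadcast if dst.broadcast is not None else dst[-1]
        return not (int(dst.network) >= int(rd_end) or
                    int(dst_end) <= int(rd.network))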

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
Gate failing on test_routerrule_detail
https://bugs.launchpad.net/bugs/1490403
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490403] [NEW] Gate failing on test_routerrule_detail

2015-08-31 Thread Frode Nordahl
Public bug reported:

The gate/jenkins checks are currently bombing out on this error:

ERROR: test_routerrule_detail 
(openstack_dashboard.dashboards.project.routers.tests.RouterRuleTests)
--
Traceback (most recent call last):
  File "/home/ubuntu/horizon/openstack_dashboard/test/helpers.py", line 111, in 
instance_stub_out
return fn(self, *args, **kwargs)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/tests.py", 
line 711, in test_routerrule_detail
res = self._get_detail(router)
  File "/home/ubuntu/horizon/openstack_dashboard/test/helpers.py", line 111, in 
instance_stub_out
return fn(self, *args, **kwargs)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/tests.py", 
line 49, in _get_detail
args=[router.id]))
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 470, in get
**extra)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 286, in get
return self.generic('GET', path, secure=secure, **r)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 358, in generic
return self.request(**r)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/test/client.py",
 line 440, in request
six.reraise(*exc_info)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/home/ubuntu/horizon/horizon/decorators.py", line 36, in dec
return view_func(request, *args, **kwargs)
  File "/home/ubuntu/horizon/horizon/decorators.py", line 52, in dec
return view_func(request, *args, **kwargs)
  File "/home/ubuntu/horizon/horizon/decorators.py", line 36, in dec
return view_func(request, *args, **kwargs)
  File "/home/ubuntu/horizon/horizon/decorators.py", line 84, in dec
return view_func(request, *args, **kwargs)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 69, in view
return self.dispatch(request, *args, **kwargs)
  File 
"/home/ubuntu/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py",
 line 87, in dispatch
return handler(request, *args, **kwargs)
  File "/home/ubuntu/horizon/horizon/tabs/views.py", line 146, in get
context = self.get_context_data(**kwargs)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/views.py", 
line 140, in get_context_data
context = super(DetailView, self).get_context_data(**kwargs)
  File "/home/ubuntu/horizon/horizon/tables/views.py", line 107, in 
get_context_data
context = super(MultiTableMixin, self).get_context_data(**kwargs)
  File "/home/ubuntu/horizon/horizon/tabs/views.py", line 56, in 
get_context_data
exceptions.handle(self.request)
  File "/home/ubuntu/horizon/horizon/exceptions.py", line 359, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File "/home/ubuntu/horizon/horizon/tabs/views.py", line 54, in 
get_context_data
context["tab_group"].load_tab_data()
  File "/home/ubuntu/horizon/horizon/tabs/base.py", line 128, in load_tab_data
exceptions.handle(self.request)
  File "/home/ubuntu/horizon/horizon/exceptions.py", line 359, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File "/home/ubuntu/horizon/horizon/tabs/base.py", line 125, in load_tab_data
tab._data = tab.get_context_data(self.request)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 82, in get_context_data
data["rulesmatrix"] = self.get_routerrulesgrid_data(rules)
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 127, in get_routerrulesgrid_data
source, target, rules))
  File 
"/home/ubuntu/horizon/openstack_dashboard/dashboards/project/routers/extensions/routerrules/tabs.py",
 line 159, in _get_subnet_connectivity
if (int(dst.network) >= int(rd.broadcast) or
TypeError: int() argument must be a string or a number, not 'NoneType'

** Affects: horizon
 Importance: Undecided
 Status: New

** Package changed: horizon (Ubuntu) => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1490403

Title:
  Gate failing on test_routerrule_detail

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The gate/jenkins checks are currently bombing out on this error:

  ERROR: test_routerrule_detail 
(openstack_dashboard.dashboards.project.routers.tests.RouterRuleTests)
  --
  Traceback (most recent call last):

[Yahoo-eng-team] [Bug 1490764] Re: Wrong handling of domain_id passed as None

2015-08-31 Thread lei zhang
domain_id is not a variable, it can't be None

as in:
https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L743

This step just checks whether ref['domain_id'] already exists.

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1490764

Title:
  Wrong handling of domain_id passed as None

Status in Keystone:
  Invalid

Bug description:
  Keystone does not handle the domain_id passed as none in controller
  layer, as in:

  
https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L743

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1490764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490599] [NEW] Glance returns a 500 status code when an artifact is added to its own "depends_on"

2015-08-31 Thread dshakhray
Public bug reported:

ENVIRONMENT: devstack, Glance (master, 30.08.2015)

STEPS TO REPRODUCE:
We have the artifact:
{"description": null, "published_at": null, "tags": [], "depends_on": null, 
"created_at": "2015-08-31T10:30:24.00", "type_name": "MyArtifact", 
"updated_at": "2015-08-31T10:30:24.00", "visibility": "private", "id": 
"3f931cb3-8715-4dff-9d79-639a6853ed14", "type_version": "2.0", "state": 
"creating", "version": "11.0.0", "references": [], "prop1": null, "prop2": 
null, "owner": "a82a48dc05df447baab0afe1770c2be8", "image_file": null, 
"deleted_at": null, "screenshots": [], "int_list": null, "name": "art"}

Send request:
curl -H "X-Auth-Token:e9d6e4a533ba4d40b37ebbb7bbe3a1e5" -H 
"Content-Type:application/json" -X POST -d 
'{"data":"3f931cb3-8715-4dff-9d79-639a6853ed14"}' 
http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/05b88273-b7b1-4689-ab1d-130c5c93e280/depends_on
 -i

ACTUAL RESULT
HTTP/1.1 500 Internal Server Error
Content-Length: 228
Content-Type: text/html; charset=UTF-8
X-Openstack-Request-Id: req-2593f86a-83a3-47df-b129-1341e62cfecd
Date: Mon, 31 Aug 2015 13:20:24 GMT


 
  500 Internal Server Error
  The server has either erred or is incapable of performing the requested
  operation.

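The 500 suggests the self-reference is not validated before it reaches the
backend. A hypothetical validation sketch (names assumed, this is not the
Glance artifacts code) that would turn this case into a 400 instead:

    # Hypothetical check: reject an artifact added as its own "depends_on"
    # dependency with a client error rather than letting it become a 500.
    import webob.exc

    def set_depends_on(artifact_id, dependency_id):
        if dependency_id == artifact_id:
            raise webob.exc.HTTPBadRequest(
                explanation="An artifact cannot depend on itself.")
        # ... persist the dependency here ...
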
** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1490599

Title:
  Glance returns a 500 status code when an artifact is added to its own "depends_on"

Status in Glance:
  New

Bug description:
  ENVIRONMENT: devstack, Glance (master, 30.08.2015)

  STEPS TO REPRODUCE:
  We have the artifact:
  {"description": null, "published_at": null, "tags": [], "depends_on": null, 
"created_at": "2015-08-31T10:30:24.00", "type_name": "MyArtifact", 
"updated_at": "2015-08-31T10:30:24.00", "visibility": "private", "id": 
"3f931cb3-8715-4dff-9d79-639a6853ed14", "type_version": "2.0", "state": 
"creating", "version": "11.0.0", "references": [], "prop1": null, "prop2": 
null, "owner": "a82a48dc05df447baab0afe1770c2be8", "image_file": null, 
"deleted_at": null, "screenshots": [], "int_list": null, "name": "art"}

  Send request:
  curl -H "X-Auth-Token:e9d6e4a533ba4d40b37ebbb7bbe3a1e5" -H 
"Content-Type:application/json" -X POST -d 
'{"data":"3f931cb3-8715-4dff-9d79-639a6853ed14"}' 
http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/05b88273-b7b1-4689-ab1d-130c5c93e280/depends_on
 -i

  ACTUAL RESULT
  HTTP/1.1 500 Internal Server Error
  Content-Length: 228
  Content-Type: text/html; charset=UTF-8
  X-Openstack-Request-Id: req-2593f86a-83a3-47df-b129-1341e62cfecd
  Date: Mon, 31 Aug 2015 13:20:24 GMT

  
   
  500 Internal Server Error
  The server has either erred or is incapable of performing the requested
  operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1490599/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482301] Re: 'X-Openstack-Request-ID' length limited only by header size

2015-08-31 Thread Tristan Cacqueray
** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1482301

Title:
  'X-Openstack-Request-ID' length limited only by header size

Status in Glance:
  In Progress
Status in Glance juno series:
  New
Status in Glance kilo series:
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Glance accepts 'X-Openstack-Request-ID' header and includes the value
  in log-files. The length of the Request ID is limited only by
  max_header_line parameter that defaults to 16384. This opens
  possibility to flood the logs.

  Public as this vulnerability was already discussed today on Glance
  weekly meeting.
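
  A minimal sketch of the kind of mitigation this implies (the cap below is an
  assumed value for illustration, not something taken from the bug or from any
  patch):

      # Truncate an oversized X-Openstack-Request-ID before it is logged.
      MAX_REQUEST_ID_LENGTH = 64  # assumed limit for illustration

      def sanitize_request_id(headers):
          req_id = headers.get('X-Openstack-Request-ID', '')
          return req_id[:MAX_REQUEST_ID_LENGTH]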

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1482301/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461678] Re: nova error handling causes glance to keep unlinked files open, wasting space

2015-08-31 Thread Doug Hellmann
** Changed in: python-glanceclient
   Status: Fix Committed => Fix Released

** Changed in: python-glanceclient
Milestone: None => 1.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461678

Title:
  nova error handling causes glance to keep unlinked files open, wasting
  space

Status in OpenStack Compute (nova):
  Fix Committed
Status in python-glanceclient:
  Fix Released

Bug description:
  When creating larger glance images (like a 10GB CentOS7 image), if we
  run into situation where we run out of room on the destination device,
  we cannot recover the space from glance. glance-api will have open
  unlinked files, so a TONNE of space is unavailable until we restart
  glance-api.

  Nova will try to reschedule the instance 3 times, so you should see this in
nova-conductor.log:
  u'RescheduledException: Build of instance 
98ca2c0d-44b2-48a6-b1af-55f4b2db73c1 was re-scheduled: [Errno 28] No space left 
on device\n']

  The problem is this code in
  nova.image.glance.GlanceImageService.download():

  if data is None:
      return image_chunks
  else:
      try:
          for chunk in image_chunks:
              data.write(chunk)
      finally:
          if close_file:
              data.close()

  image_chunks is an iterator.  If we take an exception (like we can't
  write the file because the filesystem is full) then we will stop
  iterating over the chunks.  If we don't iterate over all the chunks
  then glance will keep the file open.
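
  A sketch of the general idea behind a fix (not necessarily the merged
  change): make sure the chunk iterator is closed even when the write fails,
  so glance-api can release the unlinked file.

      # Sketch only: drain/close the chunk iterator in all cases.
      def safe_download(image_chunks, data=None, close_file=False):
          if data is None:
              return image_chunks
          try:
              for chunk in image_chunks:
                  data.write(chunk)
          finally:
              # Generators expose close(); calling it lets the response be
              # cleaned up instead of leaving the unlinked file open.
              if hasattr(image_chunks, 'close'):
                  image_chunks.close()
              if close_file:
                  data.close()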

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229324] Re: extraneous vim editor configuration comments

2015-08-31 Thread Doug Hellmann
** Changed in: python-glanceclient
   Status: Fix Committed => Fix Released

** Changed in: python-glanceclient
Milestone: None => 1.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1229324

Title:
  extraneous vim editor configuration comments

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Manila:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Fix Released
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  In Progress
Status in python-glanceclient:
  Fix Released
Status in python-neutronclient:
  In Progress
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  In Progress
Status in storyboard:
  New
Status in OpenStack Object Storage (swift):
  In Progress
Status in taskflow:
  Fix Released
Status in tempest:
  Fix Released
Status in tuskar:
  Fix Released

Bug description:
  Many of the source code files have a beginning line

  # vim: tabstop=4 shiftwidth=4 softtabstop=4

  This should be deleted.

  Many of these lines are in the ceilometer/openstack/common directory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1229324/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1214176] Re: Fix copyright headers to be compliant with Foundation policies

2015-08-31 Thread Doug Hellmann
** Changed in: python-glanceclient
   Status: Fix Committed => Fix Released

** Changed in: python-glanceclient
Milestone: None => 1.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1214176

Title:
  Fix copyright headers to be compliant with Foundation policies

Status in Ceilometer:
  Fix Released
Status in devstack:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Keystone:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-neutronclient:
  Fix Committed
Status in OpenStack Object Storage (swift):
  Fix Released
Status in tempest:
  Fix Committed
Status in Trove:
  Fix Released

Bug description:
  Correct the copyright headers to be consistent with the policies
  outlined by the OpenStack Foundation at http://www.openstack.org/brand
  /openstack-trademark-policy/

  Remove references to OpenStack LLC, replace with OpenStack Foundation

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1214176/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1179008] Re: rename requires files to standard names

2015-08-31 Thread Doug Hellmann
** Changed in: python-glanceclient
   Status: Fix Committed => Fix Released

** Changed in: python-glanceclient
Milestone: None => 1.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1179008

Title:
  rename requires files to standard names

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in Gear:
  New
Status in git-review:
  Fix Committed
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Keystone:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Fix Released
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-neutronclient:
  Fix Committed
Status in python-novaclient:
  Fix Released
Status in python-openstackclient:
  Fix Committed
Status in python-swiftclient:
  Fix Released
Status in Python client library for Zaqar:
  In Progress
Status in OpenStack Object Storage (swift):
  Fix Released
Status in Trove:
  Fix Released
Status in Zuul:
  In Progress

Bug description:
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1

  Rename tools/pip-requires to requirements.txt and tools/test-requires
  to test-requirements.txt. These are standard files and tools in the
  general world are growing intelligence about them.

   affects ceilometer
   affects cinder
   affects git-review
   affects glance
   affects heat-cfntools
   affects heat
   affects horizon
   affects keystone
   affects nova
   affects openstack-ci
   affects oslo
   affects python-ceilometerclient
   affects python-cinderclient
   affects python-gear
   affects python-glanceclient
   affects python-heatclient
   affects python-keystoneclient
   affects python-novaclient
   affects python-openstackclient
   affects python-quantumclient
   affects python-swiftclient
   affects quantum
   affects reddwarf
   affects swift
   affects tempest
   affects zuul
  -BEGIN PGP SIGNATURE-
  Version: GnuPG v1.4.12 (GNU/Linux)
  Comment: Using GnuPG with undefined - http://www.enigmail.net/

  iEYEARECAAYFAlGObegACgkQ2Jv7/VK1RgH+yQCbBuxIZvk/Ra4TEK0TlLqr3xAU
  cj8An1NPMiQ47VubNdsKg6ybymtRRjto
  =fwhb
  -END PGP SIGNATURE-

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1179008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1179007] Re: Migrate build system to pbr

2015-08-31 Thread Doug Hellmann
** Changed in: python-glanceclient
   Status: Fix Committed => Fix Released

** Changed in: python-glanceclient
Milestone: None => 1.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1179007

Title:
  Migrate build system to pbr

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in Gear:
  New
Status in git-review:
  Fix Committed
Status in heat:
  Fix Released
Status in heat-cfntools:
  Fix Released
Status in Keystone:
  Fix Released
Status in neutron:
  Fix Released
Status in oslo-incubator:
  Fix Released
Status in python-ceilometerclient:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-neutronclient:
  Fix Committed
Status in python-novaclient:
  Fix Released
Status in python-openstackclient:
  Fix Committed
Status in python-swiftclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Committed
Status in OpenStack Object Storage (swift):
  Fix Released
Status in Trove:
  Fix Released
Status in Zuul:
  New

Bug description:
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA1

  openstack.common.setup and openstack.common.version are now in the
  standalone library pbr. Migrating involves moving build config to
  setup.cfg, copying in a stub setup.py file, adding pbr and d2to1 to the
  build depends, removing openstack.common.(setup|version) from the
  filesystem and from openstack-common.conf and making sure *.egg is in
  .gitignore.
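
  For reference, the stub setup.py mentioned above is tiny; a sketch of the
  standard pbr stub (the stubs in use at the time also pulled d2to1 in via the
  build depends):

      # Stub setup.py for pbr: all project metadata lives in setup.cfg.
      import setuptools

      setuptools.setup(
          setup_requires=['pbr'],
          pbr=True)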

   affects ceilometer
   affects cinder
   affects git-review
   affects heat-cfntools
   affects heat
   affects keystone
   affects openstack-ci
   affects oslo
   affects python-ceilometerclient
   affects python-cinderclient
   affects python-gear
   affects python-glanceclient
   affects python-heatclient
   affects python-keystoneclient
   affects python-novaclient
   affects python-openstackclient
   affects python-quantumclient
   affects python-swiftclient
   affects reddwarf
   affects swift
   affects zuul
  -BEGIN PGP SIGNATURE-
  Version: GnuPG v1.4.12 (GNU/Linux)
  Comment: Using GnuPG with undefined - http://www.enigmail.net/

  iEYEARECAAYFAlGObdUACgkQ2Jv7/VK1RgFlkACgzycOW0/rPvnLaXXX9/oqYA7q
  kGEAoMaEzGbFEAnsQA6+cEsKIUSMWAPD
  =W8F0
  -END PGP SIGNATURE-

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1179007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348818] Re: Unittests do not succeed with random PYTHONHASHSEED value

2015-08-31 Thread Doug Hellmann
** Changed in: python-glanceclient
   Status: Fix Committed => Fix Released

** Changed in: python-glanceclient
Milestone: None => 1.0.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1348818

Title:
  Unittests do not succeed with random PYTHONHASHSEED value

Status in Ceilometer:
  Fix Released
Status in Cinder:
  In Progress
Status in Cinder icehouse series:
  Fix Committed
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in heat:
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Dashboard (Horizon) icehouse series:
  In Progress
Status in Ironic:
  Fix Released
Status in Keystone:
  Fix Released
Status in Keystone icehouse series:
  Fix Committed
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-neutronclient:
  Fix Committed
Status in Sahara:
  Fix Released
Status in Trove:
  Fix Released
Status in WSME:
  Fix Released

Bug description:
  New tox and python3.3 set a random PYTHONHASHSEED value by default.
  These projects should support this in their unittests so that we do
  not have to override the PYTHONHASHSEED value and potentially let bugs
  into these projects.

  To reproduce these failures:

  # install latest tox
  pip install --upgrade tox
  tox --version # should report 1.7.2 or greater
  cd $PROJECT_REPO
  # edit tox.ini to remove any PYTHONHASHSEED=0 lines
  tox -epy27

  Most of these failures appear to be related to dict entry ordering.
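
  As a contrived illustration of that failure mode (not taken from any of
  these projects), a test that bakes dict iteration order into its expected
  value will flip between passing and failing as PYTHONHASHSEED changes:

      # Fragile test: the expected string assumes one particular key order.
      def serialize(params):
          return '&'.join('%s=%s' % (k, v) for k, v in params.items())

      def test_serialize():
          assert serialize({'a': 1, 'b': 2}) == 'a=1&b=2'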

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1348818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490628] [NEW] Dashboard Panels should not include SCSS

2015-08-31 Thread Rajat Vig
Public bug reported:

Currently the following SCSS is included in the Dashboard Panels'
project.scss and identity.scss:

// Custom Theme Variables
@import "/custom/variables";
@import "/dashboard/scss/variables";

// Custom Style Variables
@import "/custom/styles";

This introduces multiple inclusions. Instead, this should only be done
in app.scss.

Additionally, the StaticFileFinder already finds the SCSS files for
the dashboards, so the inclusion in

_1000_project.py
and
_3000_identity.py 

is not required.

** Affects: horizon
 Importance: Undecided
 Assignee: Rajat Vig (rajatv)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1490628

Title:
  Dashboard Panels should not include SCSS

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Currently the following SCSS is included in the Dashboard Panels'
  project.scss and identity.scss:

  // Custom Theme Variables
  @import "/custom/variables";
  @import "/dashboard/scss/variables";

  // Custom Style Variables
  @import "/custom/styles";

  This introduces multiple inclusions. Instead, this should only be done
  in app.scss.

  Additionally, the StaticFileFinder already finds the SCSS files
  for the dashboards, so the inclusion in

  _1000_project.py
  and
  _3000_identity.py 

  is not required.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1490628/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483408] Re: Decryption failure after replacing openssl with cryptography lib

2015-08-31 Thread Skyler Berg
Yes, I switched it to invalid. Thanks for the help with this.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483408

Title:
  Decryption failure after replacing openssl with cryptography lib

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  After updating some Python packages on Tintri CI, the server has started
failing an ec2 test.
  The logs contain error messages indicating trouble with decryption.

  Most likely breaking change:
  
https://github.com/openstack/nova/commit/452fe92787ff871417846748fc13e2a6a2899325

  Failing tempest test:
  tempest.thirdparty.boto.test_ec2_instance_run.InstanceRunTest

  Operating system: RHEL 7.1

  Failing run logs: http://openstack-ci.tintri.com/tintri/refs-
  changes-37-203237-10/

  Passing logs from before CI started failing: http://openstack-
  ci.tintri.com/tintri/refs-changes-76-182276-28/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483408/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479887] Re: Default subnetpools cannot be defined by name

2015-08-31 Thread Carl Baldwin
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479887

Title:
  Default subnetpools cannot be defined by name

Status in neutron:
  Invalid

Bug description:
  The values for default_ipv4_subnet_pool and default_ipv6_subnet_pool
  currently have to be defined as the UUID of the desired subnetpool.
  This leads to a chicken & egg situation where the admin has to somehow
  enter the UUID into the conf file before neutron is initialised, and
  therefore before the UUID can be generated.

  These values should instead be defined by the name of the desired
  subnetpool, so the admin can create it after neutron is started.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1479887/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372666] Re: n-api and n-cpu receive timeouts from q-svc because of "Lock Wait timeout"

2015-08-31 Thread Armando Migliaccio
query ran today:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQ29ubmVjdGlvbiB0byBuZXV0cm9uIGZhaWxlZDogSFRUUENvbm5lY3Rpb25Qb29sKGhvc3Q9JzEyNy4wLjAuMScsIHBvcnQ9OTY5Nik6IFJlYWQgdGltZWQgb3V0XCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tbi1hcGkudHh0XCIiLCJmaWVsZHMiOlsiYnVpbGRfc3RhdHVzIiwiZmlsZW5hbWUiXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDExNDI1MjE4ODg2fQ==

Yielded no results.

This one:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiSFRUUENvbm5lY3Rpb25Qb29sKGhvc3Q9JzEyNy4wLjAuMScsIHBvcnQ9OTY5Nik6IFJlYWQgdGltZWQgb3V0LiAocmVhZCB0aW1lb3V0PTMwKVwiIiwiZmllbGRzIjpbImJ1aWxkX3N0YXR1cyIsImZpbGVuYW1lIl0sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQ0MTA0NDgwODY0OH0=

(message:"HTTPConnectionPool(host='127.0.0.1', port=9696): Read timed
out. (read timeout=30)")

Yields a handful of errors but they are because of Juno builds, and we
ain't gonna fix that.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372666

Title:
  n-api and n-cpu receive timeouts from q-svc because of "Lock Wait
  timeout"

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  This request failed:

  http://logs.openstack.org/12/123112/1/check/check-tempest-dsvm-
  neutron-full/cdb7110/logs/screen-n-api.txt.gz#_2014-09-22_14_16_01_028

  2014-09-22 14:16:01.028 DEBUG nova.api.openstack.wsgi 
[req-bb64d882-d91e-4bff-9407-19277208e277 TestSecurityGroupsBasicOps-454747816 
TestSecurityGroupsBasicOps-1777134551] Calling method '>' _process_stack 
/opt/stack/new/nova/nova/api/openstack/wsgi.py:935
  2014-09-22 14:16:01.063 DEBUG neutronclient.client 
[req-bb64d882-d91e-4bff-9407-19277208e277 TestSecurityGroupsBasicOps-454747816 
TestSecurityGroupsBasicOps-1777134551] 
  REQ: curl -i 
http://127.0.0.1:9696/v2.0/ports.json?device_id=40737ad4-4513-4027-b031-cf7cf519d5b5
 -X GET -H "X-Auth-Token: 916a5769e0ba42339f45c3f6bb00f147" -H "User-Agent: 
python-neutronclient"
   http_log_req 
/opt/stack/new/python-neutronclient/neutronclient/common/utils.py:140
  2014-09-22 14:16:31.065 DEBUG neutronclient.client 
[req-bb64d882-d91e-4bff-9407-19277208e277 TestSecurityGroupsBasicOps-454747816 
TestSecurityGroupsBasicOps-1777134551] throwing ConnectionFailed : 
HTTPConnectionPool(host='127.0.0.1', port=9696): Read timed out. (read 
timeout=30) _cs_request 
/opt/stack/new/python-neutronclient/neutronclient/client.py:132
  2014-09-22 14:16:48.360 ERROR nova.api.openstack 
[req-bb64d882-d91e-4bff-9407-19277208e277 TestSecurityGroupsBasicOps-454747816 
TestSecurityGroupsBasicOps-1777134551] Caught error: Connection to neutron 
failed: HTTPConnectionPool(host='127.0.0.1', port=9696): Read timed out. (read 
timeout=30)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack Traceback (most recent 
call last):
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/__init__.py", line 124, in __call__
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack return 
req.get_response(self.application)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack application, 
catch_exc_info=False)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
646, in __call__
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack return 
self._call_app(env, start_response)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
"/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py", line 
624, in _call_app
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack return 
self._app(env, _fake_start_response)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack return 
resp(environ, start_response)
  2014-09-22 14:16:48.360 30109 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2014-09-22 14:16:48.360 30109 TRACE 

[Yahoo-eng-team] [Bug 1490368] Re: test_list_virtual_interfaces fails due to invalid mac address

2015-08-31 Thread Davanum Srinivas (DIMS)
I believe this was caused by netaddr 0.7.16; a fix was made in netaddr
and 0.7.17 was released. See details below

https://review.openstack.org/#/c/218720/
https://github.com/drkjam/netaddr/issues/114
https://github.com/drkjam/netaddr/commit/75eee70655597da60123aae7835afb8f66760149
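
For context, the tempest assertion boils down to a strict check of the MAC
string format, which an alternative formatting (dashes, unpadded octets)
would fail. A contrived version of that kind of check, not the tempest code:

    # Strict colon-separated, zero-padded MAC pattern.
    import re

    MAC_RE = re.compile(r'^([0-9a-f]{2}:){5}[0-9a-f]{2}$')

    def is_valid_mac(mac):
        return bool(MAC_RE.match(mac.lower()))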



** Changed in: nova
 Assignee: (unassigned) => Davanum Srinivas (DIMS) (dims-v)

** Changed in: nova
   Status: New => Invalid

** Changed in: tempest
   Status: New => Invalid

** Changed in: tempest
 Assignee: (unassigned) => Davanum Srinivas (DIMS) (dims-v)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1490368

Title:
  test_list_virtual_interfaces fails due to invalid mac address

Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  Invalid

Bug description:
  The test failed on the gate like

  http://logs.openstack.org/56/217456/2/check/gate-tempest-dsvm-
  full/ba8c5ef/logs/testr_results.html.gz

  Traceback (most recent call last):
File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/tempest_lib/decorators.py",
 line 40, in wrapper
  return f(self, *func_args, **func_kwargs)
File "tempest/test.py", line 126, in wrapper
  return f(self, *func_args, **func_kwargs)
File "tempest/api/compute/servers/test_virtual_interfaces.py", line 60, in 
test_list_virtual_interfaces
  "Invalid mac address detected.")
File 
"/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/unittest2/case.py",
 line 702, in assertTrue
  raise self.failureException(msg)
  AssertionError: False is not true : Invalid mac address detected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1490368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490690] [NEW] Discovery fails for V3 when admin not exposed

2015-08-31 Thread Adam Young
Public bug reported:

V3 is not specifically tied to either public or admin in the specs, but
practically speaking it is tied to admin.

When attempting to use the V3 API and the admin port is not exposed, the
following happens:

$ echo $OS_AUTH_URL 
https://hostname/v3

$ openstack server list
ERROR: openstack Unable to establish connection to 
https://hostname:35357/v3/auth/tokens


Running on debug shows more information:
RESP BODY: {"version": {"status": "stable", "updated": "2013-03-06T00:00:00Z", 
"media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}, {"base": "application/xml", 
"type": "application/vnd.openstack.identity-v3+xml"}], "id": "v3.0", "links": 
[{"href": "https://keystone-admin.dream.io:35357/v3/", "rel": "self"}]}}


It is the link in that response being used for discovery.  That should be the 
public URL, not the admin.
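
A rough sketch of why that link matters (illustrative only, using requests
rather than keystoneclient): discovery follows the advertised "self" href, so
an admin-only href sends every later request to port 35357 even though the
user configured the public endpoint.

    # Illustrative only: version discovery follows the "self" link.
    import requests

    def discover_auth_url(auth_url):
        version_doc = requests.get(auth_url).json()['version']
        for link in version_doc.get('links', []):
            if link.get('rel') == 'self':
                return link['href']  # may be the admin URL, as above
        return auth_url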

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1490690

Title:
  Discovery fails for V3 when admin not exposed

Status in Keystone:
  New

Bug description:
  V3 is not specifically tied to either public or admin in the specs,
  but practically speaking it is tied to admin.

  When attempting to use the V3 API and the admin port is not exposed,
  the following happens:

  $ echo $OS_AUTH_URL 
  https://hostname/v3

  $ openstack server list
  ERROR: openstack Unable to establish connection to 
https://hostname:35357/v3/auth/tokens

  
  Running on debug shows more information:
  RESP BODY: {"version": {"status": "stable", "updated": 
"2013-03-06T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v3+json"}, {"base": "application/xml", 
"type": "application/vnd.openstack.identity-v3+xml"}], "id": "v3.0", "links": 
[{"href": "https://keystone-admin.dream.io:35357/v3/", "rel": "self"}]}}

  
  It is the link in that response being used for discovery.  That should be the 
public URL, not the admin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1490690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1123462] Re: run_tests.sh does not work if keystone is not installed

2015-08-31 Thread David Stanek
run_tests.sh was deleted in https://review.openstack.org/#/c/199343/

** Changed in: keystone
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1123462

Title:
  run_tests.sh does not work if keystone is not installed

Status in Keystone:
  Won't Fix

Bug description:
  Attempting to run unit tests from a checked out tree with run_tests.sh
  fails with 38 errors similar to this:

  ==
  ERROR: Failure: ImportError (No module named keystone.common)
  --
  Traceback (most recent call last):
File 
"/home/johannes/virtualenvs/keystone/lib/python2.6/site-packages/nose/loader.py",
 line 390, in loadTestsFromName
  addr.filename, addr.module)
File 
"/home/johannes/virtualenvs/keystone/lib/python2.6/site-packages/nose/importer.py",
 line 39, in importFromPath
  return self.importFromDir(dir_path, fqname)
File 
"/home/johannes/virtualenvs/keystone/lib/python2.6/site-packages/nose/importer.py",
 line 86, in importFromDir
  mod = load_module(part_fqname, fh, filename, desc)
File "/home/johannes/openstack/keystone/trunk/tests/test_wsgi.py", line 19, 
in 
  from keystone.common import wsgi
  ImportError: No module named keystone.common

  It appears that the python path is not setup correctly when using
  run_tests.sh. This same workflow works in nova, glance and quantum.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1123462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461183] Re: keystone/tests/unit/test_v3.py:RestfulTestCase.load_sample_data still uses the assignment_api

2015-08-31 Thread David Stanek
The assignment API's add_role_to_user_and_project is not deprecated and can
be used by the tests. Only some of the assignment_api was deprecated
when the resource_api was created, not all of it.

** Changed in: keystone
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1461183

Title:
  keystone/tests/unit/test_v3.py:RestfulTestCase.load_sample_data still
  uses the assignment_api

Status in Keystone:
  Won't Fix

Bug description:
  All test classes that inherit
  keystone/tests/unit/test_v3.py:RestfulTestCase run a load_sample_data
  method [0]. This method creates some sample data to test with and it
  still uses the assignment API, which has been deprecated. This method
  should be refactored to use the resource API instead.

  
  [0] 
https://github.com/openstack/keystone/blob/f6c01dd1673b290578e9fff063e27104412ffeda/keystone/tests/unit/test_v3.py#L235-L240

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1461183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490704] [NEW] ESLint glob pattern not matching all files for lint

2015-08-31 Thread Rajat Vig
Public bug reported:

The file glob pattern in package.json

*/static openstack_dashboard/dashboards/*/static

is not running eslint on the openstack_dashboard/static files.

The files in that package have unaddressed warnings.

** Affects: horizon
 Importance: Undecided
 Assignee: Rajat Vig (rajatv)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1490704

Title:
  ESLint glob pattern not matching all files for lint

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The file glob pattern in package.json

  */static openstack_dashboard/dashboards/*/static

  is not running eslint on the openstack_dashboard/static files.

  The files in that package have unaddressed warnings.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1490704/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490743] [NEW] Attempt to call strftime on a str fails revocation_list

2015-08-31 Thread Timothy Symanczyk
Public bug reported:

This bug is nearly identical to the old bug
https://bugs.launchpad.net/keystone/+bug/1285871: same symptom and
nearly identical code. It is currently causing barbican to fail for
us.

2015-08-21 03:17:05.244 22742 ERROR keystone.common.wsgi [-] 'str' object has 
no a
ttribute 'strftime'
2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi Traceback (most recent 
ca
ll last):
2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 239, in 
__call__
2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi result = 
method(context, **params)
2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 158, in 
inner
2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi return f(self, 
context, *args, **kwargs)
2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 549, in 
revocation_list
2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi t['expires'] = 
timeutils.isotime(expires)
2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/oslo_utils/timeutils.py", line 52, in isotime
2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi st = 
at.strftime(_ISO8601_TIME_FORMAT
2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi AttributeError: 'str' 
object has no attribute 'strftime'
2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi 
2015-08-21 03:17:05.263 22742 INFO eventlet.wsgi.server [-] 
100.72.132.128,10.50.249.37 - - [21/Aug/2015 03:17:05] "GET 
/v3/auth/tokens/OS-PKI/revoked HTTP/1.1" 500 470 0.063453
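
The trace shows a plain string reaching oslo_utils.timeutils.isotime(), which
expects a datetime. A sketch of the kind of guard that avoids the crash (not
necessarily the fix that will land):

    # Coerce string expiry values back to datetime before formatting.
    import six
    from oslo_utils import timeutils

    def normalize_expires(expires):
        if isinstance(expires, six.string_types):
            expires = timeutils.parse_isotime(expires)
        return timeutils.isotime(expires)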

** Affects: keystone
 Importance: Undecided
 Assignee: Timothy Symanczyk (timothy-symanczyk)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Timothy Symanczyk (timothy-symanczyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1490743

Title:
  Attempt to call strftime on a str fails revocation_list

Status in Keystone:
  New

Bug description:
  This bug is nearly identical to the old bug
  https://bugs.launchpad.net/keystone/+bug/1285871: same symptom and
  nearly identical code. It is currently causing barbican to fail for
  us.

  2015-08-21 03:17:05.244 22742 ERROR keystone.common.wsgi [-] 'str' object has 
no a
  ttribute 'strftime'
  2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi Traceback (most 
recent ca
  ll last):
  2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 239, in 
__call__
  2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi result = 
method(context, **params)
  2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 158, in 
inner
  2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi return f(self, 
context, *args, **kwargs)
  2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/auth/controllers.py", line 549, in 
revocation_list
  2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi t['expires'] = 
timeutils.isotime(expires)
  2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/oslo_utils/timeutils.py", line 52, in isotime
  2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi st = 
at.strftime(_ISO8601_TIME_FORMAT
  2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi AttributeError: 
'str' object has no attribute 'strftime'
  2015-08-21 03:17:05.244 22742 TRACE keystone.common.wsgi 
  2015-08-21 03:17:05.263 22742 INFO eventlet.wsgi.server [-] 
100.72.132.128,10.50.249.37 - - [21/Aug/2015 03:17:05] "GET 
/v3/auth/tokens/OS-PKI/revoked HTTP/1.1" 500 470 0.063453

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1490743/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490748] [NEW] Login directive no longer hides domain and region dropdown

2015-08-31 Thread Thai Tran
Public bug reported:

Recent linting changes broke the login directive. When a user selects a
websso authentication type like saml or openid, the domain and region
lists no longer hide.

** Affects: horizon
 Importance: High
 Assignee: Thai Tran (tqtran)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1490748

Title:
  Login directive no longer hides domain and region dropdown

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Recent linting changes broke the login directive. When a user selects a
  websso authentication type like saml or openid, the domain and region
  lists no longer hide.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1490748/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479346] Re: LBaaS: test_healthmonitor_basic scenario fails

2015-08-31 Thread Madhusudhan Kandadai
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479346

Title:
  LBaaS: test_healthmonitor_basic scenario fails

Status in neutron:
  Fix Released

Bug description:
  test_healthmonitor_basic fails in Jenkins as well as when it is run
  locally using tox -e scenario

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "neutron_lbaas/tests/tempest/lib/test.py", line 310, in 
tearDownClass
  six.reraise(etype, value, trace)
File "neutron_lbaas/tests/tempest/lib/test.py", line 293, in 
tearDownClass
  teardown()
File "neutron_lbaas/tests/tempest/lib/test.py", line 502, in 
clear_isolated_creds
  cls._creds_provider.clear_isolated_creds()
File "neutron_lbaas/tests/tempest/lib/common/isolated_creds.py", line 
415, in clear_isolated_creds
  self._clear_isolated_net_resources()
File "neutron_lbaas/tests/tempest/lib/common/isolated_creds.py", line 
406, in _clear_isolated_net_resources
  creds.subnet['name'])
File "neutron_lbaas/tests/tempest/lib/common/isolated_creds.py", line 
357, in _clear_isolated_subnet
  net_client.delete_subnet(subnet_id)
File 
"neutron_lbaas/tests/tempest/lib/services/network/json/network_client.py", line 
113, in _delete
  resp, body = self.delete(uri)
File 
"/opt/stack/neutron-lbaas/.tox/scenario/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 287, in delete
  return self.request('DELETE', url, extra_headers, headers, body)
File 
"/opt/stack/neutron-lbaas/.tox/scenario/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 643, in request
  resp, resp_body)
File 
"/opt/stack/neutron-lbaas/.tox/scenario/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 705, in _error_checker
  raise exceptions.Conflict(resp_body)
  tempest_lib.exceptions.Conflict: An object with that identifier already 
exists
  Details: {u'message': u'Unable to complete operation on subnet 
8a98d02b-13c5-4d68-8908-18cc2ba868a3. One or more ports have an IP allocation 
from this subnet.', u'detail': u'', u'type': u'SubnetInUse'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1479346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490764] [NEW] Wrong handling of domain_id passed as None

2015-08-31 Thread Henrique Truta
Public bug reported:

Keystone does not handle the domain_id passed as none in controller
layer, as in:

https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L743
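
An illustrative guard for the behaviour being described (names are
assumptions, this is not the keystone controller code): treat an explicit
None the same as a missing key before falling back to a default.

    # Illustrative only: an explicit domain_id of None should not be treated
    # as a present, valid value.
    def normalize_domain_id(ref, default_domain_id):
        if ref.get('domain_id') is None:  # covers both missing and None
            ref['domain_id'] = default_domain_id
        return ref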

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1490764

Title:
  Wrong handling of domain_id passed as None

Status in Keystone:
  New

Bug description:
  Keystone does not handle the domain_id passed as none in controller
  layer, as in:

  
https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L743

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1490764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490767] [NEW] DB migration of geneve type driver should be 'expand'

2015-08-31 Thread Itsuro Oda
Public bug reported:

https://review.openstack.org/#/c/187945/

It only adds a table, so it should definitely be in the 'expand'
directory.

It was already merged; I don't know how to fix that, though.
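
For context, a hedged sketch of what an 'expand' migration that only adds a
table looks like (the revision ids, table name and columns below are
illustrative placeholders, not the actual geneve migration):

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic (placeholders).
revision = 'abc123expand'
down_revision = 'def456'


def upgrade():
    # Purely additive DDL such as creating a table can run while the service
    # is online, which is why it belongs under the expand branch.
    op.create_table(
        'example_geneve_allocations',
        sa.Column('geneve_vni', sa.Integer(), nullable=False),
        sa.Column('allocated', sa.Boolean(), nullable=False,
                  server_default=sa.sql.false()),
        sa.PrimaryKeyConstraint('geneve_vni'),
    )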

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490767

Title:
  DB migration of geneve type driver should be 'expand'

Status in neutron:
  New

Bug description:
  https://review.openstack.org/#/c/187945/

  It only adds a table, so it should definitely be in the 'expand'
  directory.

  It was already merged; I don't know how to fix that, though.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479346] Re: LBaaS: test_healthmonitor_basic scenario fails

2015-08-31 Thread Armando Migliaccio
This is not released yet. Leave it to Jenkins, which will know when this
is 'released' properly.

** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479346

Title:
  LBaaS: test_healthmonitor_basic scenario fails

Status in neutron:
  Fix Committed

Bug description:
  test_healthmonitor_basic fails in Jenkins as well as when it is run
  locally using tox -e scenario

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "neutron_lbaas/tests/tempest/lib/test.py", line 310, in 
tearDownClass
  six.reraise(etype, value, trace)
File "neutron_lbaas/tests/tempest/lib/test.py", line 293, in 
tearDownClass
  teardown()
File "neutron_lbaas/tests/tempest/lib/test.py", line 502, in 
clear_isolated_creds
  cls._creds_provider.clear_isolated_creds()
File "neutron_lbaas/tests/tempest/lib/common/isolated_creds.py", line 
415, in clear_isolated_creds
  self._clear_isolated_net_resources()
File "neutron_lbaas/tests/tempest/lib/common/isolated_creds.py", line 
406, in _clear_isolated_net_resources
  creds.subnet['name'])
File "neutron_lbaas/tests/tempest/lib/common/isolated_creds.py", line 
357, in _clear_isolated_subnet
  net_client.delete_subnet(subnet_id)
File 
"neutron_lbaas/tests/tempest/lib/services/network/json/network_client.py", line 
113, in _delete
  resp, body = self.delete(uri)
File 
"/opt/stack/neutron-lbaas/.tox/scenario/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 287, in delete
  return self.request('DELETE', url, extra_headers, headers, body)
File 
"/opt/stack/neutron-lbaas/.tox/scenario/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 643, in request
  resp, resp_body)
File 
"/opt/stack/neutron-lbaas/.tox/scenario/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 705, in _error_checker
  raise exceptions.Conflict(resp_body)
  tempest_lib.exceptions.Conflict: An object with that identifier already 
exists
  Details: {u'message': u'Unable to complete operation on subnet 
8a98d02b-13c5-4d68-8908-18cc2ba868a3. One or more ports have an IP allocation 
from this subnet.', u'detail': u'', u'type': u'SubnetInUse'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1479346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489200] Re: Upon VM deletes, SG iptables not cleaned up, garbage piles up

2015-08-31 Thread Ramu Ramamurthy
I applied the following patch, released in the later Kilo release
(neutron/2015.1.1):

- [81e043f] Don't delete port from bridge on delete_port event
https://bugs.launchpad.net/neutron/+bug/165

and the problem is not seen anymore.


** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489200

Title:
  Upon VM deletes, SG iptables not cleaned up, garbage piles up

Status in neutron:
  Fix Released

Bug description:
  Summary:  40 VMs are created and then deleted on the same host. At the
  end of this, I find that iptables rules for some ports are not cleaned
  up, and remain as garbage. This garbage keeps piling up, as more VMs
  are created and deleted.

  Topology:
   Openstack Kilo, with Neutron Network using OVS & neutron 
security groups.
   Kilo Component versions are as follows:
openstack-neutron-2015.1.0.2
openstack-neutron-ml2-2015.1.0.2   
openstack-neutron-openvswitch-2015.1.0.2

  Test Case:

   1) create 1 network, 1 subnetwork
   2) boot 40 VMs on one hypervisor  and 40 VMs on another 
hypervisor using the default Security Group
   3) Run some traffic tests between VMs
   4) delete all VMs

  Result:
     Find that iptables rules are not cleaned up for the ports 
of the VMs

  Root Cause:
   In the neutron-ovs-agent polling loop, there is an exception during the
  processing of port events. As a result of this exception, the
  neutron-ovs-agent resyncs with the plugin. This takes a while. At the same
  time, VM ports are getting deleted. In this scenario, the neutron-ovs-agent
  "misses" some deleted ports and does not clean up the SG filters for those
  "missed" ports.

  Reproducibility:

    Happens almost every time; it is more likely with a larger number of
  VMs.

  Logs:

   Attached are a set of neutron-ovs-agent logs, and the
  garbage iptables rules that remain.
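
  As an aside, a simplified sketch (not the actual ovs agent code) of how such
  "missed" deletions can be reconciled after a resync: compare the ports that
  still have SG filters against the ports that currently exist and treat the
  difference as deleted.

  def find_missed_deleted_ports(filtered_ports, current_ports):
      """Return port ids that still have SG filters but no longer exist."""
      return set(filtered_ports) - set(current_ports)

  stale = find_missed_deleted_ports(
      filtered_ports=['port-a', 'port-b', 'port-c'],
      current_ports=['port-b'])
  print(sorted(stale))  # ['port-a', 'port-c'] -> their iptables rules can go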

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489200/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490787] [NEW] show_numbers(self.data) on Pie Charts are broken

2015-08-31 Thread Diana Whitten
Public bug reported:

show_numbers(self.data) in
horizon/static/horizon/js/horizon.d3piechart.js obviously doesn't work
as intended any longer. If you turn it on, it prints "[object Object]"
on top of the pie chart. o.O

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1490787

Title:
  show_numbers(self.data) on Pie Charts are broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  show_numbers(self.data) in
  horizon/static/horizon/js/horizon.d3piechart.js obviously doesn't work
  as intended any longer. If you turn it on, it prints "[object Object]"
  on top of the pie chart. o.O

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1490787/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490793] [NEW] Filtering volume sources doesn't work in ng launch instance wizard

2015-08-31 Thread Justin Pomeroy
Public bug reported:

Make sure you have at least one volume with an image as its source.  In
the angular Launch Instance wizard, select "Volume" as the boot source
and then try to filter the list of available volumes based on the type.
For example, if the type is RAW, entering "raw" into the filter will not
show this volume.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1490793

Title:
  Filtering volume sources doesn't work in ng launch instance wizard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Make sure you have at least one volume with an image as its source.
  In the angular Launch Instance wizard, select "Volume" as the boot
  source and then try to filter the list of available volumes based on
  the type.  For example, if the type is RAW, entering "raw" into the
  filter will not show this volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1490793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490491] [NEW] Glance returned 500 status code when we add/replace element to blob property

2015-08-31 Thread dshakhray
Public bug reported:

ENVIRONMENT: devstack, Glance (master, 30.08.2015)

STEPS TO REPRODUCE:
We tried to add/replace an element of a blob property on an artifact.
Send request
1) curl -H "X-Auth-Token: b582b953413b4a8896bfa27e1b70d4e0" -H 
"Content-Type:application/octet-stream" -X PATCH -d '[{"op": "add", "path": 
"/image_file", "value": "la-la-la"}]' 
http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/4d1e26e5-87b7-49b4-99bd-0e6232a80142
 -i

2) curl -H "X-Auth-Token: b582b953413b4a8896bfa27e1b70d4e0" -H "Content-
Type:application/octet-stream" -X PATCH -d '[{"op": "replace", "path":
"/image_file", "value": "la-la-la"}]'
http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/4d1e26e5-87b7-49b4
-99bd-0e6232a80142 -i

EXPECTED RESULT:
status code 200 and added/replaced blob property

ACTUAL RESULT:

HTTP/1.1 500 Internal Server Error
Content-Length: 228
Content-Type: text/html; charset=UTF-8
X-Openstack-Request-Id: req-322bf960-5756-4f6f-af46-f80705ee79c2
Date: Mon, 31 Aug 2015 10:07:18 GMT


 
500 Internal Server Error
The server has either erred or is incapable of performing the requested
operation.


** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1490491

Title:
  Glance returned 500 status code when we add/replace element to blob
  property

Status in Glance:
  New

Bug description:
  ENVIRONMENT: devstack, Glance (master, 30.08.2015)

  STEPS TO REPRODUCE:
  We tried to add/replace an element of a blob property on an artifact.
  Send request
  1) curl -H "X-Auth-Token: b582b953413b4a8896bfa27e1b70d4e0" -H 
"Content-Type:application/octet-stream" -X PATCH -d '[{"op": "add", "path": 
"/image_file", "value": "la-la-la"}]' 
http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/4d1e26e5-87b7-49b4-99bd-0e6232a80142
 -i

  2) curl -H "X-Auth-Token: b582b953413b4a8896bfa27e1b70d4e0" -H
  "Content-Type:application/octet-stream" -X PATCH -d '[{"op":
  "replace", "path": "/image_file", "value": "la-la-la"}]'
  http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/4d1e26e5-87b7-49b4
  -99bd-0e6232a80142 -i

  EXPECTED RESULT:
  status code 200 and added/replaced blob property

  ACTUAL RESULT:

  HTTP/1.1 500 Internal Server Error
  Content-Length: 228
  Content-Type: text/html; charset=UTF-8
  X-Openstack-Request-Id: req-322bf960-5756-4f6f-af46-f80705ee79c2
  Date: Mon, 31 Aug 2015 10:07:18 GMT

  
   
  500 Internal Server Error
  The server has either erred or is incapable of performing the requested
  operation.
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1490491/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490511] [NEW] Glance returned 500 status code when we download removed blob property

2015-08-31 Thread dshakhray
Public bug reported:

ENVIRONMENT: devstack, Glance (master, 30.08.2015)

STEPS TO REPRODUCE:
We tried to download a blob property element of an artifact.
We have the artifact:
{"description": null, "published_at": null, "tags": [], "depends_on": null, 
"created_at": "2015-08-31T10:30:24.00", "type_name": "MyArtifact", 
"updated_at": "2015-08-31T10:30:24.00", "visibility": "private", "id": 
"3f931cb3-8715-4dff-9d79-639a6853ed14", "type_version": "2.0", "state": 
"creating", "version": "11.0.0", "references": [], "prop1": null, "prop2": 
null, "owner": "a82a48dc05df447baab0afe1770c2be8", "image_file": null, 
"deleted_at": null, "screenshots": [], "int_list": null, "name": "art"}

Send request:
curl -H "X-Auth-Token: 7df867f7b5ae42778b355a74b6d5d76c" -H 
"Content-Type:application/octet-stream" -X PATCH -d '[{"op": "remove", "path": 
"/image_file"}]' 
http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/3f931cb3-8715-4dff-9d79-639a6853ed14
 -i

Get response:
HTTP/1.1 200 OK
Content-Length: 640
Content-Type: application/json; charset=UTF-8
X-Openstack-Request-Id: req-42927006-cdaa-4a67-9143-8427830a1228
Date: Mon, 31 Aug 2015 10:34:39 GMT

{"description": null, "published_at": null, "tags": [], "depends_on":
null, "created_at": "2015-08-31T10:30:24.00", "type_name":
"MyArtifact", "updated_at": "2015-08-31T10:30:24.00", "visibility":
"private", "id": "3f931cb3-8715-4dff-9d79-639a6853ed14", "type_version":
"2.0", "state": "creating", "version": "11.0.0", "references": [],
"prop1": null, "prop2": null, "owner":
"a82a48dc05df447baab0afe1770c2be8", "image_file": {"checksum": null,
"download_link": "/artifacts/myartifact/v2.0/3f931cb3-8715-4dff-
9d79-639a6853ed14/image_file/download", "size": 0}, "deleted_at": null,
"screenshots": [], "int_list": null, "name": "art"}

We tried to download the "image_file" link above, send request:
curl -H "X-Auth-Token: 7df867f7b5ae42778b355a74b6d5d76c" -H 
"Content-Type:application/octet-stream" -X GET 
http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/3f931cb3-8715-4dff-9d79-639a6853ed14/image_file/download
 -i

ACTUAL RESULT:
HTTP/1.1 500 Internal Server Error
Content-Type: text/plain
Content-Length: 0
Date: Mon, 31 Aug 2015 10:43:51 GMT
Connection: close

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1490511

Title:
  Glance returned 500 status code when we download removed blob property

Status in Glance:
  New

Bug description:
  ENVIRONMENT: devstack, Glance (master, 30.08.2015)

  STEPS TO REPRODUCE:
  We tried to download a blob property element of an artifact.
  We have the artifact:
  {"description": null, "published_at": null, "tags": [], "depends_on": null, 
"created_at": "2015-08-31T10:30:24.00", "type_name": "MyArtifact", 
"updated_at": "2015-08-31T10:30:24.00", "visibility": "private", "id": 
"3f931cb3-8715-4dff-9d79-639a6853ed14", "type_version": "2.0", "state": 
"creating", "version": "11.0.0", "references": [], "prop1": null, "prop2": 
null, "owner": "a82a48dc05df447baab0afe1770c2be8", "image_file": null, 
"deleted_at": null, "screenshots": [], "int_list": null, "name": "art"}

  Send request:
  curl -H "X-Auth-Token: 7df867f7b5ae42778b355a74b6d5d76c" -H 
"Content-Type:application/octet-stream" -X PATCH -d '[{"op": "remove", "path": 
"/image_file"}]' 
http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/3f931cb3-8715-4dff-9d79-639a6853ed14
 -i

  Get response:
  HTTP/1.1 200 OK
  Content-Length: 640
  Content-Type: application/json; charset=UTF-8
  X-Openstack-Request-Id: req-42927006-cdaa-4a67-9143-8427830a1228
  Date: Mon, 31 Aug 2015 10:34:39 GMT

  {"description": null, "published_at": null, "tags": [], "depends_on":
  null, "created_at": "2015-08-31T10:30:24.00", "type_name":
  "MyArtifact", "updated_at": "2015-08-31T10:30:24.00",
  "visibility": "private", "id": "3f931cb3-8715-4dff-9d79-639a6853ed14",
  "type_version": "2.0", "state": "creating", "version": "11.0.0",
  "references": [], "prop1": null, "prop2": null, "owner":
  "a82a48dc05df447baab0afe1770c2be8", "image_file": {"checksum": null,
  "download_link": "/artifacts/myartifact/v2.0/3f931cb3-8715-4dff-
  9d79-639a6853ed14/image_file/download", "size": 0}, "deleted_at":
  null, "screenshots": [], "int_list": null, "name": "art"}

  We tried to download the "image_file" link above, send request:
  curl -H "X-Auth-Token: 7df867f7b5ae42778b355a74b6d5d76c" -H 
"Content-Type:application/octet-stream" -X GET 
http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/3f931cb3-8715-4dff-9d79-639a6853ed14/image_file/download
 -i

  ACTUAL RESULT:
  HTTP/1.1 500 Internal Server Error
  Content-Type: text/plain
  Content-Length: 0
  Date: Mon, 31 Aug 2015 10:43:51 GMT
  Connection: close

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1490511/+subscriptions


[Yahoo-eng-team] [Bug 1414559] Re: OVS drops RARP packets by QEMU upon live-migration - VM temporarily disconnected

2015-08-31 Thread sean mooney
Marking as invalid as the bug cannot be reproduced.
Please reopen if this is still an issue for you and you can provide more
information on how to recreate it.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414559

Title:
  OVS drops RARP packets by QEMU upon live-migration - VM temporarily
  disconnected

Status in neutron:
  Invalid

Bug description:
  When live-migrating a VM, QEMU sends 5 RARP packets in order to allow
re-learning of the new location of the VM's MAC address.
  However, the VIF creation scheme between nova-compute and neutron-ovs-agent
drops these RARPs:
  1. nova creates a port on OVS, but without the internal tagging.
  2. At this stage all the packets that come out of the VM, or the QEMU process
it runs in, will be dropped.
  3. QEMU sends five RARP packets in order to allow MAC learning. These
packets are dropped as described in #2.
  4. In the meanwhile, neutron-ovs-agent loops every POLLING_INTERVAL and scans
for new ports. Once it detects that a new port has been added, it reads the
properties of the new port and assigns the correct internal tag, which allows
connection of the VM.

  The flow above suggests that:
  1. RARP packets are dropped, so MAC learning takes much longer and depends on 
internal traffic and advertising by the VM.
  2. VM is disconnected from the network for a mean period of POLLING_INTERVAL/2

  Seems like this could be solved by direct messages between nova vif
  driver and neutron-ovs-agent

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490429] [NEW] glance image-show just returns 'id'

2015-08-31 Thread Rabi Mishra
Public bug reported:

glance -d image-show 31e0d3a0-c29d-49bc-bc71-ee8a3f11c693
curl -g -i -X HEAD -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 
'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: 
{SHA1}936d7b4cf9f2f0a3793e0ebb446a58ecd3d577aa' -H 'Content-Type: 
application/octet-stream' 
http://192.168.1.51:9292/v1/images/31e0d3a0-c29d-49bc-bc71-ee8a3f11c693

HTTP/1.1 200 OK
Content-Length: 0
X-Image-Meta-Id: 31e0d3a0-c29d-49bc-bc71-ee8a3f11c693
X-Image-Meta-Deleted: False
X-Image-Meta-Checksum: ee1eca47dc88f4879d8a229cc70a07c6
X-Image-Meta-Status: active
X-Image-Meta-Container_format: bare
X-Image-Meta-Protected: False
X-Image-Meta-Min_disk: 0
X-Image-Meta-Min_ram: 0
X-Image-Meta-Created_at: 2015-08-31T07:57:41.00
X-Image-Meta-Size: 13287936
Connection: keep-alive
Etag: ee1eca47dc88f4879d8a229cc70a07c6
X-Image-Meta-Is_public: True
Date: Mon, 31 Aug 2015 07:59:49 GMT
X-Image-Meta-Owner: 7cadb48541814309be95e0a977517b49
X-Image-Meta-Updated_at: 2015-08-31T07:57:41.00
Content-Type: text/html; charset=UTF-8
X-Openstack-Request-Id: req-e6434a2b-a014-4ae2-bcc3-07377afbc5a5
X-Image-Meta-Disk_format: qcow2
X-Image-Meta-Name: cirros-0.3.4

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/glanceclient/shell.py", line 667, in 
main
args.func(client, args)
  File "/usr/lib/python2.7/site-packages/glanceclient/v1/shell.py", line 142, 
in do_image_show
image_id = utils.find_resource(gc.images, args.image).id
  File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 491, in __getattr__
self.get()
  File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 509, in get
new = self.manager.get(self.id)
  File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 494, in __getattr__
raise AttributeError(k)
AttributeError: id
id


I don't see any errors in g-reg.log or g-api.log

** Affects: python-glanceclient
 Importance: Undecided
 Status: New

** Project changed: glance => python-glanceclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1490429

Title:
  glance image-show just returns 'id'

Status in python-glanceclient:
  New

Bug description:
  glance -d image-show 31e0d3a0-c29d-49bc-bc71-ee8a3f11c693
  curl -g -i -X HEAD -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 
'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: 
{SHA1}936d7b4cf9f2f0a3793e0ebb446a58ecd3d577aa' -H 'Content-Type: 
application/octet-stream' 
http://192.168.1.51:9292/v1/images/31e0d3a0-c29d-49bc-bc71-ee8a3f11c693

  HTTP/1.1 200 OK
  Content-Length: 0
  X-Image-Meta-Id: 31e0d3a0-c29d-49bc-bc71-ee8a3f11c693
  X-Image-Meta-Deleted: False
  X-Image-Meta-Checksum: ee1eca47dc88f4879d8a229cc70a07c6
  X-Image-Meta-Status: active
  X-Image-Meta-Container_format: bare
  X-Image-Meta-Protected: False
  X-Image-Meta-Min_disk: 0
  X-Image-Meta-Min_ram: 0
  X-Image-Meta-Created_at: 2015-08-31T07:57:41.00
  X-Image-Meta-Size: 13287936
  Connection: keep-alive
  Etag: ee1eca47dc88f4879d8a229cc70a07c6
  X-Image-Meta-Is_public: True
  Date: Mon, 31 Aug 2015 07:59:49 GMT
  X-Image-Meta-Owner: 7cadb48541814309be95e0a977517b49
  X-Image-Meta-Updated_at: 2015-08-31T07:57:41.00
  Content-Type: text/html; charset=UTF-8
  X-Openstack-Request-Id: req-e6434a2b-a014-4ae2-bcc3-07377afbc5a5
  X-Image-Meta-Disk_format: qcow2
  X-Image-Meta-Name: cirros-0.3.4

  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/glanceclient/shell.py", line 667, in 
main
  args.func(client, args)
File "/usr/lib/python2.7/site-packages/glanceclient/v1/shell.py", line 142, 
in do_image_show
  image_id = utils.find_resource(gc.images, args.image).id
File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 491, in __getattr__
  self.get()
File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 509, in get
  new = self.manager.get(self.id)
File 
"/usr/lib/python2.7/site-packages/glanceclient/openstack/common/apiclient/base.py",
 line 494, in __getattr__
  raise AttributeError(k)
  AttributeError: id
  id

  
  I don't see any errors in g-reg.log or g-api.log

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1490429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489860] Re: libvirtError: Unable to write to monitor: Broken pipe during snapshotting

2015-08-31 Thread Silvan Kaiser
*** This bug is a duplicate of bug 1489581 ***
https://bugs.launchpad.net/bugs/1489581

I'll mark this as a duplicate of 1489581, which I linked to earlier; since
that bug's fix has merged, all is well.

** This bug has been marked a duplicate of bug 1489581
   test_create_ebs_image_and_check_boot is race failing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489860

Title:
   libvirtError: Unable to write to monitor: Broken pipe during
  snapshotting

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  Our CI hits the following issue in about 30% of its runs, shown in
  the nova compute log:

  2015-08-28 11:57:47.780 INFO nova.compute.manager 
[req-4eab5a49-fbee-4623-a14a-d1a4b52fb332 
tempest-TestVolumeBootPattern-771277616 
tempest-TestVolumeBootPattern-1634675604] [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] Terminating instance
  2015-08-28 11:57:48.933 ERROR nova.virt.libvirt.driver 
[req-1fb42c20-c1d5-462b-a3a8-8c1ff0f028fc nova service] [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] Unable to create VM snapshot, failing 
volume_snapshot operation.
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] Traceback (most recent call last):
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1753, in 
_volume_snapshot_create
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] 
domain.snapshotCreateXML(snapshot_xml, snap_flags)
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in 
proxy_call
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] rv = execute(f, *args, **kwargs)
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] six.reraise(c, e, tb)
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] rv = meth(*args, **kwargs)
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559]   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 2292, in 
snapshotCreateXML
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] if ret is None:raise 
libvirtError('virDomainSnapshotCreateXML() failed', dom=self)
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] libvirtError: Unable to write to monitor: 
Broken pipe
  2015-08-28 11:57:48.933 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] 
  2015-08-28 11:57:48.934 ERROR nova.virt.libvirt.driver 
[req-1fb42c20-c1d5-462b-a3a8-8c1ff0f028fc nova service] [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] Error occurred during 
volume_snapshot_create, sending error status to Cinder.
  2015-08-28 11:57:48.934 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] Traceback (most recent call last):
  2015-08-28 11:57:48.934 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1808, in 
volume_snapshot_create
  2015-08-28 11:57:48.934 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] volume_id, create_info['new_file'])
  2015-08-28 11:57:48.934 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1753, in 
_volume_snapshot_create
  2015-08-28 11:57:48.934 13341 ERROR nova.virt.libvirt.driver [instance: 
6436ffab-6706-484b-bb29-bafc9f1b6559] 
domain.snapshotCreateXML(snapshot_xml, snap_flags)
  2015-08-28 

[Yahoo-eng-team] [Bug 1490497] [NEW] pep8-incompliant filenames missing in gate console logs

2015-08-31 Thread Vivek Dhayaal
Public bug reported:

Jenkins reported gate-keystone-pep8 failure on patch set 12 @ 
https://review.openstack.org/#/c/209524/  .
But the console logs didn't contain the names of the files that are not
compliant with pep8.
http://logs.openstack.org/24/209524/12/check/gate-keystone-pep8/b2b7500/console.html

...
2015-08-30 22:34:11.101 | pep8 runtests: PYTHONHASHSEED='3894393079'
2015-08-30 22:34:11.102 | pep8 runtests: commands[0] | flake8
2015-08-30 22:34:11.102 |   /home/jenkins/workspace/gate-keystone-pep8$ 
/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8 
2015-08-30 22:34:16.619 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8'
2015-08-30 22:34:16.620 | ___ summary 

2015-08-30 22:34:16.620 | ERROR:   pep8: commands failed
...


Typically, the output contains the filenames as well.
E.g., the console logs of patch set 1 contain the filenames.
http://logs.openstack.org/24/209524/1/check/gate-keystone-pep8/19f2885/console.html

...
2015-08-05 14:45:15.247 | pep8 runtests: PYTHONHASHSEED='1879982710'
2015-08-05 14:45:15.247 | pep8 runtests: commands[0] | flake8
2015-08-05 14:45:15.247 |   /home/jenkins/workspace/gate-keystone-pep8$ 
/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8 
2015-08-05 14:45:20.518 | ./keystone/assignment/backends/ldap.py:37:5: E301 
expected 1 blank line, found 0
2015-08-05 14:45:20.518 | @versionutils.deprecated(
2015-08-05 14:45:20.518 | ^
...
2015-08-05 14:45:20.872 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8'
2015-08-05 14:45:20.872 | ___ summary 

2015-08-05 14:45:20.873 | ERROR:   pep8: commands failed
...


** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1490497

Title:
  pep8-incompliant filenames missing in gate console logs

Status in Keystone:
  New

Bug description:
  Jenkins reported gate-keystone-pep8 failure on patch set 12 @ 
https://review.openstack.org/#/c/209524/  .
  But the console logs didn't contain the names of the files that are not
compliant with pep8.
  
http://logs.openstack.org/24/209524/12/check/gate-keystone-pep8/b2b7500/console.html
  
  ...
  2015-08-30 22:34:11.101 | pep8 runtests: PYTHONHASHSEED='3894393079'
  2015-08-30 22:34:11.102 | pep8 runtests: commands[0] | flake8
  2015-08-30 22:34:11.102 |   /home/jenkins/workspace/gate-keystone-pep8$ 
/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8 
  2015-08-30 22:34:16.619 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8'
  2015-08-30 22:34:16.620 | ___ summary 

  2015-08-30 22:34:16.620 | ERROR:   pep8: commands failed
  ...
  

  Typically, the output contains the filenames as well.
  E.g., the console logs of patch set 1 contain the filenames.
  
http://logs.openstack.org/24/209524/1/check/gate-keystone-pep8/19f2885/console.html
  
  ...
  2015-08-05 14:45:15.247 | pep8 runtests: PYTHONHASHSEED='1879982710'
  2015-08-05 14:45:15.247 | pep8 runtests: commands[0] | flake8
  2015-08-05 14:45:15.247 |   /home/jenkins/workspace/gate-keystone-pep8$ 
/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8 
  2015-08-05 14:45:20.518 | ./keystone/assignment/backends/ldap.py:37:5: E301 
expected 1 blank line, found 0
  2015-08-05 14:45:20.518 | @versionutils.deprecated(
  2015-08-05 14:45:20.518 | ^
  ...
  2015-08-05 14:45:20.872 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-keystone-pep8/.tox/pep8/bin/flake8'
  2015-08-05 14:45:20.872 | ___ summary 

  2015-08-05 14:45:20.873 | ERROR:   pep8: commands failed
  ...
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1490497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490495] [NEW] lots of invalid IPNetwork None/31 in tests

2015-08-31 Thread Miguel Angel Ajo
Public bug reported:

A high spike during the last 24h, under investigation.

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW52YWxpZCBJUE5ldHdvcmsgTm9uZS8zMVwiIiwiZmllbGRzIjpbImJ1aWxkX2NoYW5nZSJdLCJvZmZzZXQiOjEwMDAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQ0MTAxNjM1ODI1Nn0=

http://bit.ly/1hr8blF
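
A minimal reproduction of that message, assuming it comes from netaddr (the
library Neutron uses for CIDR handling): a None value formatted into a CIDR
string produces exactly this error.

import netaddr

cidr = '%s/31' % None  # a None network address sneaking into string formatting
try:
    netaddr.IPNetwork(cidr)
except netaddr.AddrFormatError as exc:
    print(exc)  # invalid IPNetwork None/31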

** Affects: neutron
 Importance: Critical
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490495

Title:
  lots of invalid IPNetwork None/31 in tests

Status in neutron:
  Confirmed

Bug description:
  A high spike during the last 24h, under investigation.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW52YWxpZCBJUE5ldHdvcmsgTm9uZS8zMVwiIiwiZmllbGRzIjpbImJ1aWxkX2NoYW5nZSJdLCJvZmZzZXQiOjEwMDAsInRpbWVmcmFtZSI6Ijg2NDAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQ0MTAxNjM1ODI1Nn0=

  http://bit.ly/1hr8blF

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490495/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490505] [NEW] Glance create element to blob with null property when we remove null element to blob property

2015-08-31 Thread dshakhray
Public bug reported:

ENVIRONMENT: devstack, Glance (master, 30.08.2015)

STEPS TO REPRODUCE:
We tried to remove a blob property element of an artifact.
We have the artifact:
{"description": null, "published_at": null, "tags": [], "depends_on": null, 
"created_at": "2015-08-31T10:30:24.00", "type_name": "MyArtifact", 
"updated_at": "2015-08-31T10:30:24.00", "visibility": "private", "id": 
"3f931cb3-8715-4dff-9d79-639a6853ed14", "type_version": "2.0", "state": 
"creating", "version": "11.0.0", "references": [], "prop1": null, "prop2": 
null, "owner": "a82a48dc05df447baab0afe1770c2be8", "image_file": null, 
"deleted_at": null, "screenshots": [], "int_list": null, "name": "art"}

Send request:
curl -H "X-Auth-Token: 7df867f7b5ae42778b355a74b6d5d76c" -H 
"Content-Type:application/octet-stream" -X PATCH -d '[{"op": "remove", "path": 
"/image_file"}]' 
http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/3f931cb3-8715-4dff-9d79-639a6853ed14
 -i

EXPECTED RESULT:
HTTP/1.1 204 OK
Content-Length: 640
Content-Type: application/json; charset=UTF-8
X-Openstack-Request-Id: req-42927006-cdaa-4a67-9143-8427830a1228
Date: Mon, 31 Aug 2015 10:34:39 GMT

{"description": null, "published_at": null, "tags": [], "depends_on":
null, "created_at": "2015-08-31T10:30:24.00", "type_name":
"MyArtifact", "updated_at": "2015-08-31T10:30:24.00", "visibility":
"private", "id": "3f931cb3-8715-4dff-9d79-639a6853ed14", "type_version":
"2.0", "state": "creating", "version": "11.0.0", "references": [],
"prop1": null, "prop2": null, "owner":
"a82a48dc05df447baab0afe1770c2be8", "image_file": {"checksum": null,
"download_link": "/artifacts/myartifact/v2.0/3f931cb3-8715-4dff-
9d79-639a6853ed14/image_file/download", "size": 0}, "deleted_at": null,
"screenshots": [], "int_list": null, "name": "art"}

ACTUAL RESULT:
HTTP/1.1 200 OK
Content-Length: 640
Content-Type: application/json; charset=UTF-8
X-Openstack-Request-Id: req-42927006-cdaa-4a67-9143-8427830a1228
Date: Mon, 31 Aug 2015 10:34:39 GMT

{"description": null, "published_at": null, "tags": [], "depends_on":
null, "created_at": "2015-08-31T10:30:24.00", "type_name":
"MyArtifact", "updated_at": "2015-08-31T10:30:24.00", "visibility":
"private", "id": "3f931cb3-8715-4dff-9d79-639a6853ed14", "type_version":
"2.0", "state": "creating", "version": "11.0.0", "references": [],
"prop1": null, "prop2": null, "owner":
"a82a48dc05df447baab0afe1770c2be8", "image_file": {"checksum": null,
"download_link": "/artifacts/myartifact/v2.0/3f931cb3-8715-4dff-
9d79-639a6853ed14/image_file/download", "size": 0}, "deleted_at": null,
"screenshots": [], "int_list": null, "name": "art"}

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1490505

Title:
  Glance create element to blob with null property when we remove null
  element to blob property

Status in Glance:
  New

Bug description:
  ENVIRONMENT: devstack, Glance (master, 30.08.2015)

  STEPS TO REPRODUCE:
  We tried to remove a blob property element of an artifact.
  We have the artifact:
  {"description": null, "published_at": null, "tags": [], "depends_on": null, 
"created_at": "2015-08-31T10:30:24.00", "type_name": "MyArtifact", 
"updated_at": "2015-08-31T10:30:24.00", "visibility": "private", "id": 
"3f931cb3-8715-4dff-9d79-639a6853ed14", "type_version": "2.0", "state": 
"creating", "version": "11.0.0", "references": [], "prop1": null, "prop2": 
null, "owner": "a82a48dc05df447baab0afe1770c2be8", "image_file": null, 
"deleted_at": null, "screenshots": [], "int_list": null, "name": "art"}

  Send request:
  curl -H "X-Auth-Token: 7df867f7b5ae42778b355a74b6d5d76c" -H 
"Content-Type:application/octet-stream" -X PATCH -d '[{"op": "remove", "path": 
"/image_file"}]' 
http://172.18.76.44:9292/v3/artifacts/myartifact/v2.0/3f931cb3-8715-4dff-9d79-639a6853ed14
 -i

  EXPECTED RESULT:
  HTTP/1.1 204 OK
  Content-Length: 640
  Content-Type: application/json; charset=UTF-8
  X-Openstack-Request-Id: req-42927006-cdaa-4a67-9143-8427830a1228
  Date: Mon, 31 Aug 2015 10:34:39 GMT

  {"description": null, "published_at": null, "tags": [], "depends_on":
  null, "created_at": "2015-08-31T10:30:24.00", "type_name":
  "MyArtifact", "updated_at": "2015-08-31T10:30:24.00",
  "visibility": "private", "id": "3f931cb3-8715-4dff-9d79-639a6853ed14",
  "type_version": "2.0", "state": "creating", "version": "11.0.0",
  "references": [], "prop1": null, "prop2": null, "owner":
  "a82a48dc05df447baab0afe1770c2be8", "image_file": {"checksum": null,
  "download_link": "/artifacts/myartifact/v2.0/3f931cb3-8715-4dff-
  9d79-639a6853ed14/image_file/download", "size": 0}, "deleted_at":
  null, "screenshots": [], "int_list": null, "name": "art"}

  ACTUAL RESULT:
  HTTP/1.1 200 OK
  Content-Length: 640
  Content-Type: application/json; charset=UTF-8
  

[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong

2015-08-31 Thread Pranali Deore
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Pranali Deore (pranali-deore)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected) , the argument order is
  wrong

Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Manila:
  In Progress
Status in OpenStack Compute (nova):
  New
Status in python-ceilometerclient:
  In Progress
Status in python-cinderclient:
  Fix Released
Status in Sahara:
  Fix Released

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
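
  A small sketch of the convention in question (assuming the usual
  unittest/testtools style used across OpenStack, where the expected value
  goes first): swapping the arguments still catches a failure, but the
  resulting message labels the values the wrong way round.

  import unittest

  class ArgumentOrderExample(unittest.TestCase):
      def test_preferred_order(self):
          observed = 1 + 1
          # Expected value first, observed value second; a failure message
          # would then present the values the way a reader expects.
          self.assertEqual(2, observed)

      def test_confusing_order(self):
          observed = 1 + 1
          # Same check with the arguments swapped; it passes or fails the
          # same way, but a failure message would read backwards.
          self.assertEqual(observed, 2)

  if __name__ == '__main__':
      unittest.main()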

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1329192] Re: nova can not stop a vm when using fake driver

2015-08-31 Thread Noel Nelson Dsouza
Using the fake driver, we are not able to launch instances; if we do launch
one, it should be in the error state.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1329192

Title:
  nova can not stop a vm when using fake driver

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When using fake driver, "nova stop" command did not change the
  instance's power state.

  Steps to reproduce the bug:
  (1) Configure nova to use the fake driver
  vim /etc/nova/nova.conf
  compute_driver = fake.FakeDriver

  (2) Boot a VM
  
  +--------------------------------------+------+--------+------------+-------------+---------------------+
  | ID                                   | Name | Status | Task State | Power State | Networks            |
  +--------------------------------------+------+--------+------------+-------------+---------------------+
  | 23f4abf8-664b-4aff-bf9e-fdad35cc7a9d | test | ACTIVE | -          | Running     | test-net=55.0.0.125 |
  +--------------------------------------+------+--------+------------+-------------+---------------------+

  (3) Stop this instance with "nova stop" command
  $> nova stop test
  $> nova list
  
  +--------------------------------------+------+---------+------------+-------------+---------------------+
  | ID                                   | Name | Status  | Task State | Power State | Networks            |
  +--------------------------------------+------+---------+------------+-------------+---------------------+
  | 23f4abf8-664b-4aff-bf9e-fdad35cc7a9d | test | SHUTOFF | -          | Running     | test-net=55.0.0.125 |
  +--------------------------------------+------+---------+------------+-------------+---------------------+

  (4) We can see the power state is still "Running", while it should be
  "Shutdown".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1329192/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490423] [NEW] Router rules _get_subnet_connectivity cidr check throws exception

2015-08-31 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The check for whether a CIDR is affected by a rule in
_get_subnet_connectivity can fail and throw an exception if any of
the values it checks is None.

This currently breaks the Horizon gate/integration tests.

A new version of the check is proposed.
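
For illustration, a hedged, None-safe version of such a check (the function
and argument names are hypothetical, not the actual Horizon helper; netaddr
is assumed for the comparison):

import netaddr


def rule_affects_cidr(rule_source, rule_destination, cidr):
    """Return True when the rule's source or destination overlaps cidr.

    Any None value is treated as "no match" instead of being fed to
    IPNetwork(), which is where the exception comes from.
    """
    if cidr is None:
        return False
    subnet = netaddr.IPNetwork(cidr)
    for value in (rule_source, rule_destination):
        if value is None:
            continue
        other = netaddr.IPNetwork(value)
        if other in subnet or subnet in other:
            return True
    return False


print(rule_affects_cidr('10.0.0.0/24', None, '10.0.0.0/16'))  # True
print(rule_affects_cidr(None, None, '10.0.0.0/16'))           # False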

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
Router rules _get_subnet_connectivity cidr check throws exception
https://bugs.launchpad.net/bugs/1490423
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486383] Re: the neutron service can't be start with multi same service_plugins variable

2015-08-31 Thread Armando Migliaccio
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1486383

Title:
  the neutron service can't be start with multi same service_plugins
  variable

Status in neutron:
  Won't Fix

Bug description:
  When service_plugins is configured with the same value more than once,
  neutron-server cannot be started. When service_plugins is assigned
  multiple values, duplicate values should be ignored instead of raising an
  error, because the error is not important.
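
  A minimal sketch of the behaviour the reporter asks for (not Neutron's
  actual plugin loading code): drop duplicate service_plugins entries while
  preserving their order instead of refusing to start.

  def dedupe_service_plugins(plugins):
      seen = set()
      unique = []
      for name in plugins:
          if name not in seen:
              seen.add(name)
              unique.append(name)
      return unique

  print(dedupe_service_plugins(['router', 'lbaas', 'router']))
  # ['router', 'lbaas']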

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1486383/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490815] [NEW] subnetallcationpool should not be an extension

2015-08-31 Thread yong sheng gong
Public bug reported:

Look at
https://github.com/openstack/neutron/blob/master/neutron/extensions/subnetallocation.py,
which defines an extension Subnetallocation but defines no extension
resource at all. Actually, it is implemented as a core resource.
So I think we should remove this extension.

** Affects: neutron
 Importance: Undecided
 Assignee: yong sheng gong (gongysh)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yong sheng gong (gongysh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490815

Title:
  subnetallcationpool should not be an extension

Status in neutron:
  New

Bug description:
  Look at
https://github.com/openstack/neutron/blob/master/neutron/extensions/subnetallocation.py,
  which defines an extension Subnetallocation but defines no extension
  resource at all. Actually, it is implemented as a core resource.
  So I think we should remove this extension.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490815/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490581] [NEW] the items will never be deleted from metering_info

2015-08-31 Thread Sergey Vilgelm
Public bug reported:

The function _purge_metering_info of the MeteringAgent class has a bug: the
items of the metering_info dictionary will never be deleted:

    if info['last_update'] > ts + report_interval:
        del self.metering_info[label_id]

In this situation last_update will always be less than the current timestamp.
Also, this function is not covered by the unit tests.
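
A hedged sketch of the corrected check (the real MeteringAgent keeps more
state than this): an entry is stale when it has not been updated within the
report interval, i.e. last_update + report_interval < now.

import time


def purge_stale_labels(metering_info, report_interval, now=None):
    now = time.time() if now is None else now
    for label_id, info in list(metering_info.items()):
        # Stale: no update received within the last report_interval seconds.
        if info['last_update'] + report_interval < now:
            del metering_info[label_id]


labels = {'a': {'last_update': time.time() - 120},
          'b': {'last_update': time.time()}}
purge_stale_labels(labels, report_interval=60)
print(sorted(labels))  # ['b']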

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: metering

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490581

Title:
  the items will never be deleted from metering_info

Status in neutron:
  New

Bug description:
  The function _purge_metering_info of the MeteringAgent class has a bug: the
  items of the metering_info dictionary will never be deleted:

      if info['last_update'] > ts + report_interval:
          del self.metering_info[label_id]

  In this situation last_update will always be less than the current timestamp.
  Also, this function is not covered by the unit tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490559] [NEW] admin not able to set 'maintenance' state for volumes

2015-08-31 Thread Masco Kaliyamoorthy
Public bug reported:

Admin is not able to set the status to 'maintenance' from Horizon, since
the options list does not contain it.

** Affects: horizon
 Importance: Undecided
 Assignee: Masco Kaliyamoorthy (masco)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Masco Kaliyamoorthy (masco)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1490559

Title:
  admin not able to set 'maintenance' state for volumes

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Admin is not able to set the status to 'maintenance' from Horizon, since
  the options list does not contain it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1490559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490516] Re: security group cannot be applied to a specified port of an instance but only to all ports.

2015-08-31 Thread Markus Zoeller (markus_z)
@javeme:

It seems that this is a feature request. Feature requests for nova are
done with blueprints [1] and specs [2]. I recommend reading [3]
if you have not yet done so. To keep the focus here on bugs, which are
failures/errors/faults, I am closing this one as "Invalid". The effort to
implement the requested feature is then driven only by the blueprint (and
spec).

If there are any questions left, feel free to contact me (markus_z)
in the IRC channel #openstack-nova

[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Tags added: api security-group

** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1490516

Title:
  security group cannot be applied to a specified port of an instance
  but only to all ports.

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Security group cannot be applied to a specified port of an instance,
  but only to all ports.

  In general [1], we just want to add a security group to a specified
  port, but this is not supported by the API 'addSecurityGroup'.

  We should allow different security groups to be applied to
  different ports of an instance.

  [1] such as an instance with an outside port and an inside port.
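
  As an aside, the per-port behaviour is already reachable on the Neutron
  side; a hedged sketch using python-neutronclient (the kilo-era
  credentials-based constructor is assumed, and the ids are placeholders):

  from neutronclient.v2_0 import client as neutron_client

  neutron = neutron_client.Client(username='demo', password='secret',
                                  tenant_name='demo',
                                  auth_url='http://127.0.0.1:5000/v2.0')

  # Replace the security groups on one specific port, leaving the
  # instance's other ports untouched.
  neutron.update_port('PORT_UUID',
                      {'port': {'security_groups': ['SECURITY_GROUP_UUID']}})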

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1490516/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490528] [NEW] Strange logic for shared images for different tenants/users

2015-08-31 Thread Timur Nurlygayanov
Public bug reported:

Nova version: 1:2015.1.1-1 (OpenStack Kilo release)

Steps To Reproduce:
1. Login to OpenStack horizon dashboard as admin user
2. Upload Ubuntu cloud image into Glance
3. Boot VM 'test1' from Ubuntu image
4. Install stress tool on the VM: 'sudo apt-get install stress'
5. Add 'stress' tool in autorun: 'sudo echo "stress --cpu 10 &" > /etc/rc.local'
6. Reboot VM 'test1'
7. Make a snapshot of 'test1' VM
8. Mark this snapshot as 'public' in Glance
9. Create 10 VMs with this image (snapshot) from admin user. All VMs became 
Active in several seconds.
10. Login as non-admin user in another tenant (for example, user 'test' in 
tenant 'my-project')
11. Boot 10 VMs with public image 'TestImage'

Expected Result:
VMs will start as quickly as for the admin tenant.

Observed Result:
VMs hang in "Downloading Image" operation, it takes more than 20 minutes to run 
VM from the snapshot in other tenants (but in the same time it takes a few 
seconds to run VMs from the same image from tenant where we have created the 
image).

Looks like Nova tries to copy this image for each new tenant, and it
doesn't looks correct.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1490528

Title:
  Strange logic for shared images for different tenants/users

Status in OpenStack Compute (nova):
  New

Bug description:
  Nova version: 1:2015.1.1-1 (OpenStack Kilo release)

  Steps To Reproduce:
  1. Login to OpenStack horizon dashboard as admin user
  2. Upload Ubuntu cloud image into Glance
  3. Boot VM 'test1' from Ubuntu image
  4. Install stress tool on the VM: 'sudo apt-get install stress'
  5. Add 'stress' tool in autorun: 'sudo echo "stress --cpu 10 &" > 
/etc/rc.local'
  6. Reboot VM 'test1'
  7. Make a snapshot of 'test1' VM
  8. Mark this snapshot as 'public' in Glance
  9. Create 10 VMs with this image (snapshot) from admin user. All VMs became 
Active in several seconds.
  10. Login as non-admin user in another tenant (for example, user 'test' in 
tenant 'my-project')
  11. Boot 10 VMs with public image 'TestImage'

  Expected Result:
  VMs will start as quickly as for the admin tenant.

  Observed Result:
  VMs hang in "Downloading Image" operation, it takes more than 20 minutes to 
run VM from the snapshot in other tenants (but in the same time it takes a few 
seconds to run VMs from the same image from tenant where we have created the 
image).

  Looks like Nova tries to copy this image for each new tenant, and it
  doesn't looks correct.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1490528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490547] [NEW] Xen Tools: scripts failing due to implicit import of CONF object

2015-08-31 Thread Sulochan Acharya
Public bug reported:

Some xen tools are failing because some CONF objects seem to be imported
implicitly through import paths rather than through import_opt. For
instance, calling destroy_cached_images.py (some fix pending, see:
https://review.openstack.org/#/c/209526 )

will result in:

xenapi = xenapi_driver.XenAPIDriver(virtapi.VirtAPI())
  File "nova/virt/xenapi/driver.py", line 130, in __init__
self._vmops = vmops.VMOps(self._session, self.virtapi)
  File "/nova/virt/xenapi/vmops.py", line 226, in __init__
self.compute_api = compute.API()
  File "/nova/compute/__init__.py", line 39, in API
return importutils.import_object(class_name, *args, **kwargs)
  File "/oslo_utils/importutils.py", line 38, in import_object
return import_class(import_str)(*args, **kwargs)
  File "/nova/compute/api.py", line 304, in __init__
self.servicegroup_api = servicegroup.API()
  File "/nova/servicegroup/api.py", line 55, in __init__
report_interval = CONF.report_interval
  File "/oslo_config/cfg.py", line 1896, in __getattr__
raise NoSuchOptError(name)
oslo_config.cfg.NoSuchOptError: no such option: report_interval


However, adding CONF.import_opt('report_interval', 'nova.service')

will result in a trace and:

ImportError: cannot import name conductor
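
For background, the oslo.config pattern the report refers to looks like this
in a regular nova process (as noted above, in the xen tools context this
particular import then trips the circular import):

from oslo_config import cfg

CONF = cfg.CONF
# Explicitly pull in the module that registers the option instead of relying
# on it having been imported as a side effect somewhere else.
CONF.import_opt('report_interval', 'nova.service')
print(CONF.report_interval)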

** Affects: nova
 Importance: Undecided
 Assignee: Sulochan Acharya (sulochan-acharya)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Sulochan Acharya (sulochan-acharya)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1490547

Title:
  Xen Tools: scripts failing due to implicit import of CONF object

Status in OpenStack Compute (nova):
  New

Bug description:
  Some xen tools are failing because some CONF objects seem to be
  imported implicitly through import paths rather than through import_opt.
  For instance, calling destroy_cached_images.py (some fix pending, see:
  https://review.openstack.org/#/c/209526 )

  will result in:

  xenapi = xenapi_driver.XenAPIDriver(virtapi.VirtAPI())
File "nova/virt/xenapi/driver.py", line 130, in __init__
  self._vmops = vmops.VMOps(self._session, self.virtapi)
File "/nova/virt/xenapi/vmops.py", line 226, in __init__
  self.compute_api = compute.API()
File "/nova/compute/__init__.py", line 39, in API
  return importutils.import_object(class_name, *args, **kwargs)
File "/oslo_utils/importutils.py", line 38, in import_object
  return import_class(import_str)(*args, **kwargs)
File "/nova/compute/api.py", line 304, in __init__
  self.servicegroup_api = servicegroup.API()
File "/nova/servicegroup/api.py", line 55, in __init__
  report_interval = CONF.report_interval
File "/oslo_config/cfg.py", line 1896, in __getattr__
  raise NoSuchOptError(name)
  oslo_config.cfg.NoSuchOptError: no such option: report_interval

  
  However, adding CONF.import_opt('report_interval', 'nova.service')

  will result in a trace and:

  ImportError: cannot import name conductor

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1490547/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp