[Yahoo-eng-team] [Bug 1364649] Re: Calendar widget does not display in front of modals

2016-05-20 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1364649

Title:
  Calendar widget does not display in front of modals

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  When the calendar widget is used on a field that is displayed in a
  modal, the widget displays behind the modal so you can't see it. None
  of the fields that use a calendar widget are currently displayed in a
  modal, so this has not been an issue before. I encountered the problem
  when extending horizon with a custom panel that uses a modal with a
  calendar widget on it.

  To recreate, the datepicker must be initialized on modal init, and a
  date field needs to be added to a modal form.  For example I added the
  following fields to the Admin -> Create Network form.  The datepicker
  init code keys off the start and end field IDs so it's easiest to just
  use those names.

  start = forms.DateField(label=_("Start date"), required=False,
                          input_formats=("%Y-%m-%d",),
                          widget=forms.DateInput(attrs={
                              'data-date-format': 'yyyy-mm-dd'}))
  end = forms.DateField(label=_("End date"), required=True,
                        input_formats=("%Y-%m-%d",),
                        widget=forms.DateInput(attrs={
                            'data-date-format': 'yyyy-mm-dd'}))

  Initialize the datepicker on modal init somewhere in the javascript:

  horizon.modals.addModalInitFunction(horizon.forms.datepicker);

  When you open the Create Network form and click in the Start date or
  End date fields you can see the calendar partly showing underneath the
  modal.
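
  A minimal workaround sketch, assuming the widget is the Bootstrap
  datepicker (rendered with a .datepicker class) and that the Bootstrap
  modal sits at its default z-index of roughly 1050 -- both are
  assumptions, not confirmed by this report -- is a stylesheet override
  that lifts the picker above the modal:

      .datepicker {
          z-index: 1151 !important;
      }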

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1364649/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583844] Re: pecan exception handler inconsistent with legacy exception handler

2016-05-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/319015
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a1c194cf060b2d563beb36261c7702950446756f
Submitter: Jenkins
Branch:master

commit a1c194cf060b2d563beb36261c7702950446756f
Author: Kevin Benton 
Date:   Fri May 13 14:29:18 2016 -0700

Make exception translation common and add to pecan

This moves the exception translation logic from the
legacy api.v2.resource module to a common module and
re-uses it from pecan to bring consistency to the
way language and exception translation is handled
between the two.

This also adjusts the policy enforcement hook to correctly
handle the unsupported method case since 'after' hooks
are executed after 'on_error' hooks that return an exception.

Closes-Bug: #1583844
Change-Id: If3c2d8c94ca6c1615f3b909accf0f718e320d1c2


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583844

Title:
  pecan exception handler inconsistent with legacy exception handler

Status in neutron:
  Fix Released

Bug description:
  the exception handler for pecan is missing the logic for handling
  NotImplementedError, language translation, and error logging that is
  present in the legacy framework; this results in inconsistent error
  responses for the user depending on the API used.
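
  For context, a rough sketch of what a shared translation helper might
  look like (module and names here are illustrative, not the actual
  neutron code):

      import logging

      import webob.exc

      LOG = logging.getLogger(__name__)

      def translate_exception(exc):
          """Map an exception to an HTTP fault the same way for both APIs."""
          if isinstance(exc, NotImplementedError):
              # Unsupported operations should surface as 501, not 500.
              return webob.exc.HTTPNotImplemented()
          if isinstance(exc, webob.exc.HTTPException):
              # Already an HTTP fault; translating its message into the
              # request's language would happen here.
              return exc
          # Anything unexpected gets logged and hidden behind a 500.
          LOG.exception("Unexpected exception")
          return webob.exc.HTTPInternalServerError()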

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583844/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583720] Re: "Migration instance not found" is logged repeatedly to nova-compute.log after an instance was deleted

2016-05-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/318832
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e5b8060c08bb972a1960575749f240d7160bc114
Submitter: Jenkins
Branch:master

commit e5b8060c08bb972a1960575749f240d7160bc114
Author: Dan Smith 
Date:   Thu May 19 11:00:17 2016 -0700

Completed migrations are not "in progress"

This fixes an issue where we consider migrations that have been completed
as "in progress" by not filtering them out of the result set from
migration_get_in_progress_by_host_and_node(). This adds that state to
the list and adds a migration in that state to the test case. If it
is not filtered, the counts won't line up and the test will fail.

Change-Id: I7aafab9abdbfafe9479846f06068ba8a963d290a
Closes-Bug: #1583720


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583720

Title:
  "Migration instance not found" is logged repeatedly to nova-
  compute.log after an instance was deleted

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  After migrating an instance and then deleting the instance, a
  migration record of status "completed" is not filtered from a query of
  in-progress migrations. This causes the following to be logged
  periodically forever:

  
   2016-04-24 21:19:39.652 24323 DEBUG nova.compute.resource_tracker [req-...]
   Migration instance not found: Instance 585ac641-... could not be found.
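
  The idea of the fix, as a sketch (plain-Python filtering for
  illustration; the real code filters in the database query):

      FINISHED = ('confirmed', 'reverted', 'error', 'failed', 'completed')

      def in_progress_for_host(migrations, host):
          """Keep only migrations that still involve this host and are
          not in a terminal status such as 'completed'."""
          return [m for m in migrations
                  if host in (m['source_compute'], m['dest_compute'])
                  and m['status'] not in FINISHED]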

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583720/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582376] Re: setting user's default_project_id to a domain ID yields HTTP 400 instead of unscoped token

2016-05-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/317792
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=8a7133f9506e0675ee5e5da9372d9be671eaaddf
Submitter: Jenkins
Branch:master

commit 8a7133f9506e0675ee5e5da9372d9be671eaaddf
Author: Guang Yee 
Date:   Tue May 17 18:10:59 2016 -0700

make sure default_project_id is not domain on user creation and update

Make sure user cannot accidentially set the default_project_id to a 
domain_id.
Invalid default_project_id is still allowed for backward compatibility.

Change-Id: I7dd33fdc299fa465333ca1d18819ef0537752f16
Closes-Bug: 1582376


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1582376

Title:
  setting user's default_project_id to a domain ID yields HTTP 400
  instead of unscoped token

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Per the spec, if a user's default_project_id is invalid (i.e. it is
  bogus, disabled, or the user has no roles assigned on it), it should
  be ignored at token request time; instead, the request should result
  in an unscoped token.

  With the recent domain-is-project changes, if you accidentally set
  the user's default_project_id to a domain_id, you will get an HTTP
  400 on token request.

  Steps to reproduce:

  1. set the user default_project_id to an existing domain_id
  2. on token request, HTTP 400 is returned

  $ curl -k -d '{"auth":{"identity":{"methods":["password"],"password":{"user":{"name":"foo","password":"bar","domain":{"id":"default"}}}}}}' -H "Content-type: application/json" http://10.0.2.15:5000/v3/auth/tokens | python -mjson.tool
  {
      "error": {
          "code": 400,
          "message": "object of type 'NoneType' has no len()",
          "title": "Bad Request"
      }
  }
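
  The fix guards user create and update; a sketch of the check, assuming
  a resource API that can look a project up by id (names illustrative):

      def check_default_project_id(resource_api, default_project_id):
          if default_project_id is None:
              return
          try:
              project = resource_api.get_project(default_project_id)
          except Exception:
              # A bogus id stays allowed for backward compatibility.
              return
          if project.get('is_domain'):
              raise ValueError(
                  'default_project_id must not point at a domain')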

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1582376/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583346] Re: eslint produces quote-prop warnings

2016-05-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/318337
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=f0a11af56aff211e7dc5110414359bcd6d985854
Submitter: Jenkins
Branch:master

commit f0a11af56aff211e7dc5110414359bcd6d985854
Author: Matt Borland 
Date:   Wed May 18 15:02:55 2016 -0600

Disabling warnings of 'quote-props'

Disables warnings of type 'quote-props' as discussed in a recent IRC 
meeting.

Change-Id: I36fd59dc639f93997bd9401e622f8ddfc662145e
Closes-Bug: 1583346


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1583346

Title:
  eslint produces quote-prop warnings

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  eslint currently produces quote-props warnings, complaining that:

  {a: 'apple'}

  should be

  {'a': 'apple'}

  On IRC we just decided that this warning is not something we want to
  enforce in Horizon.
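
  The change amounts to turning the rule off in Horizon's eslint
  configuration; roughly (the exact file layout and syntax are an
  assumption about Horizon's .eslintrc):

      rules:
        quote-props: 0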

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1583346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584236] [NEW] Empty /etc/machine-id file fails instance creation

2016-05-20 Thread Pooja Ghumre
Public bug reported:

Description
===
If the /etc/machine-id file is present but empty for some reason, instance
creation fails when the sysinfo_serial config option is set to 'auto' (the
default). The _get_host_sysinfo_serial_os function in the libvirt driver
treats an empty file the same as a missing file and throws NovaException in
both cases. This change was made as part of
https://bugs.launchpad.net/nova/+bug/1475353.
However, when sysinfo_serial is set to 'auto', the driver only checks for the
presence of the file when choosing to report the 'os' serial. If the file is
present but empty, it throws NovaException instead of falling back to
reporting the 'hardware' serial.

Filing this bug to treat an empty file and a missing file the same in the
_get_host_sysinfo_serial_auto function in the libvirt driver (similar to
_get_host_sysinfo_serial_os).
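
A sketch of the proposed behaviour, written as a standalone helper so it
runs on its own (the real code is a method on the libvirt driver):

    import os

    def choose_sysinfo_serial(get_os_serial, get_hardware_serial,
                              machine_id_path='/etc/machine-id'):
        # Treat a missing file and an empty file the same way: fall back
        # to the hardware serial instead of raising NovaException.
        if (os.path.isfile(machine_id_path)
                and os.path.getsize(machine_id_path) > 0):
            return get_os_serial()
        return get_hardware_serial()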

Steps to reproduce
==
1. Empty the contents of /etc/machine-id file on a KVM host.
2. Set sysinfo_serial option in nova.conf to 'auto' (default).
3. Try to create an instance.

Expected result
===
Instance creation should be successful.

Actual result
=
Instance creation failed with error: 
NovaException: Unable to get host UUID: /etc/machine-id is empty

Environment
===
Openstack Liberty
Hypervisor - Libvirt + KVM
Storage - Local disk
Networking - Nova-network

** Affects: nova
 Importance: Undecided
 Assignee: Pooja Ghumre (pooja-9)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Pooja Ghumre (pooja-9)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1584236

Title:
  Empty /etc/machine-id file fails instance creation

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  If the /etc/machine-id file is present but empty for some reason,
  instance creation fails when the sysinfo_serial config option is set
  to 'auto' (the default). The _get_host_sysinfo_serial_os function in
  the libvirt driver treats an empty file the same as a missing file and
  throws NovaException in both cases. This change was made as part of
  https://bugs.launchpad.net/nova/+bug/1475353.
  However, when sysinfo_serial is set to 'auto', the driver only checks
  for the presence of the file when choosing to report the 'os' serial.
  If the file is present but empty, it throws NovaException instead of
  falling back to reporting the 'hardware' serial.

  Filing this bug to treat an empty file and a missing file the same in
  the _get_host_sysinfo_serial_auto function in the libvirt driver
  (similar to _get_host_sysinfo_serial_os).

  Steps to reproduce
  ==
  1. Empty the contents of /etc/machine-id file on a KVM host.
  2. Set sysinfo_serial option in nova.conf to 'auto' (default).
  3. Try to create an instance.

  Expected result
  ===
  Instance creation should be successful.

  Actual result
  =
  Instance creation failed with error: 
  NovaException: Unable to get host UUID: /etc/machine-id is empty

  Environment
  ===
  Openstack Liberty
  Hypervisor - Libvirt + KVM
  Storage - Local disk
  Networking - Nova-network

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1584236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584237] [NEW] Support local validators

2016-05-20 Thread Armando Migliaccio
Public bug reported:

neutron-lib contains a number of API validators. If a new feature needs
a new validator, there are two options available today:

a) contribute the validator to neutron-lib/api and pull it down: this makes 
sense only if the validator can be useful across a number of projects;
b) contribute the validator locally to the project of interest and modify the 
module variable neutron_lib.api.validators.validators with the local validator 
reference. This is definitely hack-ish and should be frowned upon.

For this reason, it would be nice to have a registration mechanism for
local validators.
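
As a sketch, such a mechanism could be a small wrapper that projects call
instead of touching the module dict directly (names here are hypothetical;
this is not an existing neutron-lib API):

    from neutron_lib.api import validators

    def register_validator(name, func):
        # Refuse to overwrite a validator shipped by neutron-lib.
        if name in validators.validators:
            raise KeyError('validator %r is already registered' % name)
        validators.validators[name] = func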

** Affects: neutron
 Importance: Wishlist
 Status: Confirmed


** Tags: lib low-hanging-fruit

** Changed in: neutron
   Status: New => Confirmed

** Tags added: lib

** Changed in: neutron
   Importance: Undecided => Wishlist

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1584237

Title:
  Support local validators

Status in neutron:
  Confirmed

Bug description:
  neutron-lib contains a number of API validators. If a new feature
  needs a new validator, there are two options available today:

  a) contribute the validator to neutron-lib/api and pull it down: this makes 
sense only if the validator can be useful across a number of projects;
  b) contribute the validator locally to the project of interest and modify the 
module variable neutron_lib.api.validators.validators with the local validator 
reference. This is definitely hack-ish and should be frowned upon.

  For this reason, it would be nice to have a registration mechanism for
  local validators.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1584237/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584218] [NEW] NG Image edit removes metadata

2016-05-20 Thread Travis Tripp
Public bug reported:

Screen shot flow: http://imgur.com/a/5tPlU

Create an image and include metadata from the glance catalog.

e.g. CPU pinning = shared

After creation, expand the row and you'll see the info about cpu
pinning.

Edit the image, but don't change any values.  Just click update
(arguably that button should check whether the form is dirty).

Reload the page, expand the row, and you'll see the metadata is gone.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: angularjs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1584218

Title:
  NG Image edit removes metadata

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Screen shot flow: http://imgur.com/a/5tPlU

  Create an image and include metadata from the glance catalog.

  e.g. CPU pinning = shared

  After creation, expand the row and you'll see the info about cpu
  pinning.

  Edit the image, but don't change any values.  Just click update
  (arguably that button should check whether the form is dirty).

  Reload the page, expand the row, and you'll see the metadata is gone.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1584218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584204] Re: VersionsCallbackNotFound exception when using QoS

2016-05-20 Thread John Kasperski
Proposed patch:  https://review.openstack.org/#/c/319444/

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => John Kasperski (jckasper)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1584204

Title:
  VersionsCallbackNotFound exception when using QoS

Status in networking-ovn:
  Confirmed
Status in neutron:
  New

Bug description:
  VersionsCallbackNotFound exception occurred in neutron-server running
  networking-ovn when trying to enable QoS with the following commands:

  $ neutron qos-policy-create bw-limiter

  $ neutron qos-bandwidth-limit-rule-create bw-limiter --max-kbps 3000
  --max-burst-kbps 300

  Note:  This exception occurred when running either the core plugin or
  the ML2 mech driver.

  
  2016-05-20 09:41:36.789 27596 DEBUG oslo_policy.policy 
[req-0fe76c74-76a6-43b3-8f5b-4d85a65aec7b admin -] Reloaded policy file: 
/etc/neutron/policy.json _load_policy_file 
/usr/local/lib/python2.7/dist-packages/oslo_policy/policy.py:520
  2016-05-20 09:41:36.954 27596 INFO neutron.wsgi 
[req-0fe76c74-76a6-43b3-8f5b-4d85a65aec7b admin -] 192.168.56.10 - - 
[20/May/2016 09:41:36] "GET /v2.0/qos/policies.json?fields=id=bw-limiter 
HTTP/1.1" 200 260 0.368297
  2016-05-20 09:41:37.031 27596 DEBUG neutron.api.v2.base 
[req-c50967a6-838f-4da8-adab-9a44e7c7c207 admin -] Request body: 
{u'bandwidth_limit_rule': {u'max_kbps': u'3000', u'max_burst_kbps': u'300'}} 
prepare_request_body /opt/stack/neutron/neutron/api/v2/base.py:658
  2016-05-20 09:41:37.031 27596 DEBUG neutron.api.v2.base 
[req-c50967a6-838f-4da8-adab-9a44e7c7c207 admin -] Unknown quota resources 
['bandwidth_limit_rule']. _create /opt/stack/neutron/neutron/api/v2/base.py:460
  2016-05-20 09:41:37.056 27596 DEBUG neutron.api.rpc.handlers.resources_rpc 
[req-c50967a6-838f-4da8-adab-9a44e7c7c207 admin -] 
neutron.api.rpc.handlers.resources_rpc.ResourcesPushRpcApi method push called 
with arguments (, 
QosPolicy(description='',id=dbee9581-44a5-4889-bd06-9193eb08c10d,name='bw-limiter',rules=[QosRule(7317f86e-bacb-4c6c-9221-66e2f9d9309d)],shared=False,tenant_id=7c291c3d9d1a45dd89c8c80c7f5f12b0),
 'updated') {} wrapper 
/usr/local/lib/python2.7/dist-packages/oslo_log/helpers.py:47
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
[req-c50967a6-838f-4da8-adab-9a44e7c7c207 admin -] create failed
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 84, in resource
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 412, in create
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in 
__exit__
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
force_reraise
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 523, in _create
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource obj = 
do_create(body)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 505, in do_create
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
request.context, reservation.reservation_id)
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in 
__exit__
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-05-20 09:41:37.056 27596 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
force_reraise
  2016-05-20 09:41:37.056 27596 ERROR 

[Yahoo-eng-team] [Bug 1582725] Re: cinder_policy.json action does not match the Cinder policy.json file

2016-05-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/249379
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=388708b251b0487bb22fb3ebb8fcb36ee4ffdc4f
Submitter: Jenkins
Branch:master

commit 388708b251b0487bb22fb3ebb8fcb36ee4ffdc4f
Author: daniel-a-nguyen 
Date:   Tue Nov 24 10:59:24 2015 -0800

Updates horizon's copy of the cinder policy file

Change-Id: I7b83f0d97c330c9fe996fb752f6de57561295bde
Closes-Bug: 1582725
Co-Authored-By: Rob Cresswell 
Implements: blueprint update-cinder-policy-file


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1582725

Title:
  cinder_policy.json action does not match the Cinder policy.json file

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The horizon/openstack_dashboard/conf/cinder_policy.json actions do not match 
the policy actions that are used by the Cinder component.
  Cinder uses "volume_extension:volume_actions:upload_public"
  while Horizon's policy.json and code use "volume:upload_to_image".

  This is the only mismatch of policy actions between the 2 components.
  It also does not allow a user of Cinder and Horizon to update the
  Cinder policy.json, copy it to Horizon directly, and have the button
  function according to the Cinder policy.json rules.

  This can be missed because the Cinder policy.json file and the Horizon
  file are updated independently.

  I think that the actions the Horizon code is using should match the
  component that it is supporting.
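
  For illustration, the two entries side by side (the rule bodies are
  placeholders, not the actual defaults):

      Cinder policy.json:
          "volume_extension:volume_actions:upload_public": "rule:admin_api"

      Horizon cinder_policy.json (before this fix):
          "volume:upload_to_image": "rule:admin_or_owner"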

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1582725/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584209] [NEW] Neutron-LBaaS v2: Port ID should be returned with Loadbalancer resource (API)

2016-05-20 Thread Davide Agnello
Public bug reported:

When creating a new loadbalancer with LBaaS v2 (Octavia provider), one
may want to create a floating IP attached to the loadbalancer's VIP
port. Currently you have to look up the port ID based on the IP address
associated with the loadbalancer. It would greatly simplify the
workflow if the port ID were returned in the loadbalancer API, similar
to the VIP API in LBaaS v1.
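
For reference, the current workaround looks roughly like this with the
neutron CLI (substitute the loadbalancer's VIP address; treat the exact
invocation as an assumption about the v2 client):

    $ neutron lbaas-loadbalancer-show <lb-id> | grep vip_address
    $ neutron port-list --fixed-ips ip_address=<vip-address>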

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1584209

Title:
  Neutron-LBaaS v2: Port ID should be returned with Loadbalancer
  resource (API)

Status in neutron:
  New

Bug description:
  When creating a new loadbalancer with LBaaS v2 (Octavia provider), one
  may want to create a floating IP attached to the loadbalancer's VIP
  port. Currently you have to look up the port ID based on the IP
  address associated with the loadbalancer. It would greatly simplify
  the workflow if the port ID were returned in the loadbalancer API,
  similar to the VIP API in LBaaS v1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1584209/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1579924] Re: remove pecan_server.sh tool

2016-05-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/314297
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b53a63c9e5914460a956cb3f1692ada61ff6ad2e
Submitter: Jenkins
Branch:master

commit b53a63c9e5914460a956cb3f1692ada61ff6ad2e
Author: Kevin Benton 
Date:   Fri May 6 18:06:25 2016 -0700

Remove tools/pecan_server.sh

This isn't used anymore and may not even work.
The server runs as part of the previous neutron-server binary
now with an option to control whether pecan or the legacy
code is used.

Change-Id: Ic5236ec305ca21a5e06b3149c7270eb2d62d2606
Closes-Bug: #1579924


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1579924

Title:
  remove pecan_server.sh tool

Status in neutron:
  Fix Released

Bug description:
  neutron/tools/pecan_server.sh is not needed anymore.  It should be
  removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1579924/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584199] [NEW] HyperV: Nova serial console access support

2016-05-20 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/145004
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit e215e6cba9922e98cb358891a3f9be2e809d770f
Author: Lucian Petrut 
Date:   Mon Jan 5 16:38:10 2015 +0200

HyperV: Nova serial console access support

Hyper-V provides a solid interface for accessing serial ports via
named pipes, already employed in the Nova serial console log
implementation.

This patch makes use of this interface by implementing a simple TCP
socket proxy, providing access to instance serial console ports.

DocImpact

Implements: blueprint hyperv-serial-ports

Change-Id: I58c328391a80ee8b81f66b2e09a1bfa4b26e584c

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1584199

Title:
  HyperV: Nova serial console access support

Status in OpenStack Compute (nova):
  New

Bug description:
  https://review.openstack.org/145004
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit e215e6cba9922e98cb358891a3f9be2e809d770f
  Author: Lucian Petrut 
  Date:   Mon Jan 5 16:38:10 2015 +0200

  HyperV: Nova serial console access support
  
  Hyper-V provides a solid interface for accessing serial ports via
  named pipes, already employed in the Nova serial console log
  implementation.
  
  This patch makes use of this interface by implementing a simple TCP
  socket proxy, providing access to instance serial console ports.
  
  DocImpact
  
  Implements: blueprint hyperv-serial-ports
  
  Change-Id: I58c328391a80ee8b81f66b2e09a1bfa4b26e584c

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1584199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584169] [NEW] No support for Swift Temp URLs in glance_store

2016-05-20 Thread Paul Durivage
Public bug reported:

Glance store currently lacks support for Swift storage temporary URLs.

The parse_uri method in glance_store._drivers.http.StoreLocation does not
inspect the pieces.query property or use it when defining the path
attribute, and therefore does not support Swift temporary URLs, which
make use of the URL query string.  The code begins here:
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/http.py#L80

I can provide a patch shortly.
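
A sketch of the direction such a patch could take, written as a
standalone helper (the real fix would live inside
StoreLocation.parse_uri; names here are illustrative):

    from six.moves import urllib

    def parse_http_location(uri):
        pieces = urllib.parse.urlparse(uri)
        path = pieces.path.strip('/')
        # Keep the query string: Swift temp URLs carry their signature
        # and expiry (temp_url_sig, temp_url_expires) there.
        if pieces.query:
            path = '?'.join([path, pieces.query])
        return pieces.scheme, pieces.netloc, path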

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1584169

Title:
  No support for Swift Temp URLs in glance_store

Status in Glance:
  New

Bug description:
  Glance store currently lacks support for Swift storage temporary URLs.

  The parse_uri method in glance_store._drivers.http.StoreLocation does
  not inspect the pieces.query property or use it when defining the path
  attribute, and therefore does not support Swift temporary URLs, which
  make use of the URL query string.  The code begins here:
  
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/http.py#L80

  I can provide a patch shortly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1584169/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564575] Re: DVR router namespaces are deleted when we manually move a DVR router from one SNAT_node to another SNAT_node even though there are active VMs in the node

2016-05-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/300268
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=c198710dc551bc0f79851a7801038b033088a8c2
Submitter: Jenkins
Branch:master

commit c198710dc551bc0f79851a7801038b033088a8c2
Author: Swaminathan Vasudevan 
Date:   Thu Mar 31 17:48:09 2016 -0700

DVR: Moving router from dvr_snat node removes the qrouters

Removing the router from dvr_snat node removes the qrouters
that are servicing the VM and dhcp ports.

If there are still dvr serviceable ports in the dvr_snat node,
and if router remove command is executed to remove the router
association from the dvr_snat agent then only the snat
functionality should be moved to the different agent
and the router namespaces should be untouched.

This patch checks if there are any dvr serviceable ports for
the dvr router and if exists, it will not send a router_remove
message to the agent, but instead will send an router_update
message to the agent.

Change-Id: I5a3ba329346ab0d5ea7b0296ec64cc8e5fb4056d
Closes-Bug: #1564575


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1564575

Title:
  DVR router namespaces are deleted when we manually move a DVR router
  from one SNAT_node to another SNAT_node even though there are active
  VMs in the node

Status in neutron:
  Fix Released

Bug description:
  DVR router namespaces are deleted when we manually move the router
  from one dvr_snat node to another dvr_snat node.

  It should be only deleting the snat_namespace and not the
  router_namespace, since there are 'dhcp' ports and 'vm' ports still
  serviced by DVR.

  How to reproduce:

  Configure a two node setup:

  1. I have one node with controller, compute and networking roles, with dhcp 
running in dvr_snat mode.
  2. I have another node with compute and networking roles, without dhcp, 
running in dvr_snat mode.
  3. Now create network
  4. Create a subnet
  5. Create a router and attach the subnet to the router.
  6. Also set a gateway to the router.
  7. Now you should see that there are three namespaces in the first node.
  a. snat_namespace
  b. qrouter_namespace
  c. dhcp_namespace
  8. Now create a VM on the first node.
  9. Now try to remove the router from the first agent and assign it to the 
second agent in the second node.
  neutron l3-agent-router-remove agent-id  router-id

  This currently removes both the snat_namespace and the
  router_namespace when there is still a valid vm and dhcp port.

  
  Suspect that checking for available DVR service ports might be causing an 
issue here.

  Will try to find out the root cause.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1564575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583391] Re: Swift UI delete of files with similar names breaks

2016-05-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/318401
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=75d30cba488939d52a3d3df42d037cd00a09ed60
Submitter: Jenkins
Branch:master

commit 75d30cba488939d52a3d3df42d037cd00a09ed60
Author: Richard Jones 
Date:   Thu May 19 11:10:10 2016 +1000

Don't attempt to list the "folder" contents of Swift objects

Swift doesn't really have folders, it just has object listings
matching string prefixes. This will get confused if two objects
have the same string prefix name, so just don't do it.

Change-Id: Iab818cc965aab1470aa41ebebd5db7c50ed3836d
Fixes-Bug: 1583391


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1583391

Title:
  Swift UI delete of files with similar names breaks

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  If you attempt to delete two swift files (objects) which are named
  "spam" and "spammer" then the first will fail because Horizon's swift
  api code layer attempts to determine if the object has any folder
  contents. Yep. And because of the way swift "folders" are implemented
  (string prefix matching) the result from swift will be "yep, there's
  two matches for that prefix" so the Horizon code swift_delete_object()
  throws up a conflict error (folder not empty).
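
  The underlying ambiguity is easy to show with python-swiftclient (a
  sketch; the auth details are placeholders):

      import swiftclient

      conn = swiftclient.client.Connection(
          authurl='http://example.com:5000/v3', user='demo', key='secret',
          auth_version='3')
      # Both 'spam' and 'spammer' match the prefix 'spam', so a "does
      # this folder have contents?" probe on 'spam' sees two hits.
      headers, objects = conn.get_container('mycontainer', prefix='spam')
      print([o['name'] for o in objects])  # ['spam', 'spammer']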

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1583391/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584109] [NEW] Swift UI failures when deleting large numbers of objects

2016-05-20 Thread Travis Tripp
Public bug reported:

Failed to establish a new connection: [Errno 24] Too many open files',))

Basically I first create a bunch of containers, objects and folders with this 
script:
 - 
https://github.com/openstack/searchlight/blob/master/test-scripts/generate-swift-data.py
 -  ./generate-swift-data.py 1000 5 10 
Then I add an empty nested sub-folder foo/bar/hello

Then I try to delete all.

I got an error 500

The browser screen freezes and I cannot do anything.

I manually refresh.

I then re-select all and click delete.

It works.

But I am able to make it do the above again.

Screen shot: http://imgur.com/a/4t4AV

ConnectionError: HTTPConnectionPool(host='192.168.200.200', port=8080): Max 
retries exceeded with url: 
/v1/AUTH_4bade81378e6428db0e896db77d68e02/scale_3/BL/object_669 (Caused by 
NewConnectionError(': Failed to establish a new connection: [Errno 24] Too many open 
files',))
[20/May/2016 15:11:20] "DELETE 
/api/swift/containers/scale_3/object/BL/object_669 HTTP/1.1" 500 332
HTTP exception with no status/code
Traceback (most recent call last):
  File 
"/Users/ttripp/dev/openstack/horizon/openstack_dashboard/api/rest/utils.py", 
line 126, in _wrapped
data = function(self, request, *args, **kw)
  File 
"/Users/ttripp/dev/openstack/horizon/openstack_dashboard/api/rest/swift.py", 
line 211, in delete
api.swift.swift_delete_object(request, container, object_name)
  File "/Users/ttripp/dev/openstack/horizon/openstack_dashboard/api/swift.py", 
line 314, in swift_delete_object
swift_api(request).delete_object(container_name, object_name)
  File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/swiftclient/client.py",
 line 1721, in delete_object
response_dict=response_dict)
  File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/swiftclient/client.py",
 line 1565, in _retry
service_token=self.service_token, **kwargs)
  File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/swiftclient/client.py",
 line 1369, in delete_object
conn.request('DELETE', path, '', headers)
  File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/swiftclient/client.py",
 line 401, in request
files=files, **self.requests_args)
  File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/swiftclient/client.py",
 line 384, in _request
return self.request_session.request(*arg, **kwarg)
  File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/requests/sessions.py",
 line 475, in request
resp = self.send(prep, **send_kwargs)
  File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/requests/sessions.py",
 line 585, in send
r = adapter.send(request, **kwargs)
  File 
"/Users/ttripp/dev/openstack/horizon/.venv/lib/python2.7/site-packages/requests/adapters.py",
 line 467, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='192.168.200.200', port=8080): Max 
retries exceeded with url: 
/v1/AUTH_4bade81378e6428db0e896db77d68e02/scale_3/BL/object_667 (Caused by 
NewConnectionError(': Failed to establish a new connection: [Errno 24] Too many open 
files',))
[20/May/2016 15:11:20] "DELETE 
/api/swift/containers/scale_3/object/BL/object_667 HTTP/1.1" 500 332

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1584109

Title:
  Swift UI failures when deleting large numbers of objects

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Failed to establish a new connection: [Errno 24] Too many open
  files',))

  Basically I first create a bunch of containers, objects and folders with this 
script:
   - 
https://github.com/openstack/searchlight/blob/master/test-scripts/generate-swift-data.py
   -  ./generate-swift-data.py 1000 5 10 
  Then I add an empty nested sub-folder foo/bar/hello

  Then I try to delete all.

  I got an error 500

  The browser screen freezes and I cannot do anything.

  I manually refresh.

  I then re-select all and click delete.

  It works.

  But I am able to make it do the above again.

  Screen shot: http://imgur.com/a/4t4AV

  ConnectionError: HTTPConnectionPool(host='192.168.200.200', port=8080): Max 
retries exceeded with url: 
/v1/AUTH_4bade81378e6428db0e896db77d68e02/scale_3/BL/object_669 (Caused by 
NewConnectionError(': Failed to establish a new connection: [Errno 24] Too many open 
files',))
  [20/May/2016 15:11:20] "DELETE 
/api/swift/containers/scale_3/object/BL/object_669 HTTP/1.1" 500 332
  HTTP exception with no status/code
  Traceback (most recent call last):
File 
"/Users/ttripp/dev/openstack/horizon/openstack_dashboard/api/rest/utils.py", 
line 126, in _wrapped
  data = function(self, request, *args, **kw)
File 

[Yahoo-eng-team] [Bug 1579769] Re: neutron agent restart slow when bridge ports are missing

2016-05-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/314112
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d526e720f73cbd79e54bbed18fe3009cad1cee10
Submitter: Jenkins
Branch:master

commit d526e720f73cbd79e54bbed18fe3009cad1cee10
Author: Sreekumar S 
Date:   Mon May 9 19:17:27 2016 +0530

Fix for 'ofport' query retries during neutron agent start

When agent starts up, it checks whether patch ports exists
before adding them. But the routine used to query the
patch port's existence is get_port_ofport() which retries
the opertation because of the @_ofport_retry decoration.
This creates an unwanted delay in the startup of the
agent, when the port do not exist.
The port's existence can be checked with port_exists()
call on the bridge with no retries.

Change-Id: I9fac0066d6c03491536a6e2718d6340acd275d9d
Closes-Bug: #1579769


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1579769

Title:
  neutron agent restart slow when bridge ports are missing

Status in neutron:
  Fix Released

Bug description:
  When bridge ports are missing for br-int and br-tun and the agent starts up, 
it checks for the patch ports int-br-ex and phy-br-ex before adding them. But 
the function used to check their existence is get_port_ofport(), which retries 
because of the @_ofport_retry decoration.
  This makes the restart unnecessarily slow because of the retries. 
get_port_ofport() should be used only when the port has just been requested to 
be created and the code waits for the port to appear before proceeding further. 
We could instead just check for the port's existence with a call to 
port_exists() on the bridge.
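
  The gist of the change, sketched (bridge here stands for neutron's OVS
  bridge object; treat the exact method names as assumptions):

      def ensure_patch_port(bridge, local_name, remote_name):
          # port_exists() is a single lookup with no retries, unlike
          # get_port_ofport(), which is wrapped by @_ofport_retry.
          if not bridge.port_exists(local_name):
              bridge.add_patch_port(local_name, remote_name)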

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1579769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583948] Re: getting whole user-roles in domain or project in V3

2016-05-20 Thread Henry Nash
This is not a bug, it is working as designed. The list grants API only
lists explicit grants. If you want to see "effective" grants, you should
use he List Assignments API.

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1583948

Title:
  getting  whole user-roles in domain or project in V3

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  If a user joins a group and the group has roles on a domain, we
  currently cannot get the user's whole set of roles for the domain,
  even though the user should have the group's domain roles (the user
  belongs to the domain).

  E.g. Group1 has role1 in domain1, and a user from domain1 joins
  Group1; in V3 the user should have role1, but we currently cannot get
  that role from /v3/domains/domain1/users/user/roles.
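
  For reference, the effective role listing Henry refers to would be
  queried like this (IDs are placeholders):

      GET /v3/role_assignments?user.id=<user_id>&scope.domain.id=<domain1_id>&effective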

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1583948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584057] [NEW] Some horizon integration tests take considerably longer / fail more often than others

2016-05-20 Thread Timur Sufiev
Public bug reported:

It would be better for the overall community's mental health if those
flaky tests were temporarily disabled until the source of their
flakiness becomes more evident.

The list of tests to be disabled:

TestStacks
  test_create_delete_stack

TestVolumeSnapshotsAdmin
  test_create_edit_delete_volume_snapshot
  test_volume_snapshots_pagination

TestVolumesActions
  test_volume_upload_to_image

TestDownloadRCFile
  test_download_rc_v3_file

TestImagesAdmin
  test_image_create_delete

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1584057

Title:
  Some horizon integration tests take considerably longer / fail more
  often than others

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  It would be better for the overall community's mental health if those
  flaky tests were temporarily disabled until the source of their
  flakiness becomes more evident.

  The list of tests to be disabled:

  TestStacks
test_create_delete_stack

  TestVolumeSnapshotsAdmin
test_create_edit_delete_volume_snapshot
test_volume_snapshots_pagination

  TestVolumesActions
test_volume_upload_to_image

  TestDownloadRCFile
test_download_rc_v3_file

  TestImagesAdmin
test_image_create_delete

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1584057/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584055] [NEW] Swift UI builds a breadcrumb from the URL regardless of existence

2016-05-20 Thread Rob Cresswell
Public bug reported:

The Angular Swift UI seems overly optimistic in its construction of the
breadcrumb. To recreate:

1) Create a container named "one" and a folder named "two". Notice the URL is 
"/project/containers/container/one/two"
2) Upload any object, just as a reference point.
3) Refresh the page. A '/' is added, and now you're in a folder containing no 
objects. Use the breadcrumb to go back to 'one' and then click on 'two'. You 
are now back in your folder.

Alternatively, go to
"/project/containers/container/one/two/three". This doesn't
exist and renders as an empty folder with a constructed breadcrumb.
Instead, it should redirect (probably to the base URL)

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: angularjs swift

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1584055

Title:
  Swift UI builds a breadcrumb from the URL regardless of existence

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Angular Swift UI seems overly optimistic in its construction of
  the breadcrumb. To recreate:

  1) Create a container named "one" and a folder named "two". Notice the URL is 
"/project/containers/container/one/two"
  2) Upload any object, just as a reference point.
  3) Refresh the page. A '/' is added, and now you're in a folder containing no 
objects. Use the breadcrumb to go back to 'one' and then click on 'two'. You 
are now back in your folder.

  Alternatively, go to
  "/project/containers/container/one/two/three". This
  doesn't exist and renders as an empty folder with a constructed
  breadcrumb. Instead, it should redirect (probably to the base URL)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1584055/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1174499] Re: Keystone token hashing is MD5

2016-05-20 Thread Sharat Sharma
** Changed in: openstack-api-site
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1174499

Title:
  Keystone token hashing is MD5

Status in django-openstack-auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in openstack-api-site:
  Invalid
Status in python-keystoneclient:
  Fix Released

Bug description:
  https://github.com/openstack/python-
  keystoneclient/blob/master/keystoneclient/common/cms.py

  def cms_hash_token(token_id):
      """
      return: for ans1_token, returns the hash of the passed in token
      otherwise, returns what it was passed in.
      """
      if token_id is None:
          return None
      if is_ans1_token(token_id):
          hasher = hashlib.md5()
          hasher.update(token_id)
          return hasher.hexdigest()
      else:
          return token_id

  
  MD5 is a deprecated mechanism; it should be replaced with at least SHA1, if 
not SHA256.
  Keystone should be able to support multiple hash types, and the auth_token 
middleware should query Keystone to find out which type is in use.
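
  A minimal sketch of algorithm-agnostic token hashing (illustrative;
  hashlib.new() accepts any algorithm name the platform supports):

      import hashlib

      def hash_token(token_id, mode='sha256'):
          hasher = hashlib.new(mode)
          hasher.update(token_id.encode('utf-8'))
          return hasher.hexdigest()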

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1174499/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424970] Re: VMware: Router Type Extension Support

2016-05-20 Thread Sharat Sharma
** Project changed: openstack-api-site => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424970

Title:
  VMware: Router Type Extension Support

Status in neutron:
  Incomplete

Bug description:
  https://review.openstack.org/157377
  commit a8ed4ad3d345187bb314c7c574a1476de1a16584
  Author: Gary Kotton 
  Date:   Thu Feb 19 05:55:14 2015 -0800

  VMware: Router Type Extension Support
  
  Provide a router type extension. This will enable defining the
  type of edge appliance. That is, an exclusive edge or a shared
  edge for routing services.
  
  DocImpact
  
  Change-Id: I1464d0390c0b3ee7658e7955e6433f2ac078a5fe

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1424970/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424970] [NEW] VMware: Router Type Extension Support

2016-05-20 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

https://review.openstack.org/157377
commit a8ed4ad3d345187bb314c7c574a1476de1a16584
Author: Gary Kotton 
Date:   Thu Feb 19 05:55:14 2015 -0800

VMware: Router Type Extension Support

Provide a router type extension. This will enable defining the
type of edge appliance. That is, an exclusive edge or a shared
edge for routing services.

DocImpact

Change-Id: I1464d0390c0b3ee7658e7955e6433f2ac078a5fe

** Affects: neutron
 Importance: Undecided
 Assignee: Gary Kotton (garyk)
 Status: Incomplete


** Tags: neutron vmware
-- 
VMware: Router Type Extension Support
https://bugs.launchpad.net/bugs/1424970
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583503] Re: keepalived fails to start when PID file is empty

2016-05-20 Thread Hynek Mlnarik
Closing for neutron, duplicate of bug 1561046, fix for neutron ubuntu
package was released with version 8.1.0

** Changed in: neutron
   Status: Confirmed => Fix Released

** Changed in: neutron
 Assignee: Hynek Mlnarik (hmlnarik-s) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583503

Title:
  keepalived fails to start when PID file is empty

Status in neutron:
  Fix Released
Status in keepalived package in Ubuntu:
  Fix Released
Status in keepalived source package in Trusty:
  Triaged
Status in keepalived source package in Xenial:
  Triaged

Bug description:
  After a crash of a network node, we were left with empty PID files for
  some keepalived processes:

   root@network-node14:~# ls -l 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  -rw-r--r-- 1 root root 0 May 19 08:41 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid

  This causes the L3 agent to log the following errors repeating every
  minute:

  2016-05-19 08:46:44.525 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.external_process [-] 
keepalived for router with uuid 0ab5f647-1e04-4345-ae9b-ee66c6f08882 not found. 
The process should not have died
  2016-05-19 08:46:44.526 13554 WARNING neutron.agent.linux.external_process 
[-] Respawning keepalived for uuid 0ab5f647-1e04-4345-ae9b-ee66c6f08882
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid
  2016-05-19 08:46:44.526 13554 ERROR neutron.agent.linux.utils [-] Unable to 
convert value in 
/var/lib/neutron/ha_confs/0ab5f647-1e04-4345-ae9b-ee66c6f08882.pid-vrrp

  and the keepalived process fails to start. As a result, the routers
  hosted by this agent are non-functional.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543327] Re: Angular controllers (routes being evaluated) on django navbar clicks (hash routing)

2016-05-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/317293
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=18b351b9c47f180118b10dcf3229e6265c811493
Submitter: Jenkins
Branch:master

commit 18b351b9c47f180118b10dcf3229e6265c811493
Author: wangbo 
Date:   Tue May 17 15:12:40 2016 +0800

Angular pages will reload if collapse/expand sidebar

use ng-images as an example to explain the issue:
1. enable the ng-images page and go into this page.
2. click the left sidebar to collapse/expand a dashboard or panel-group
3. the url changes to "project/ngimages/#sidebar-accordion-***", which
   matches the route[1]. The current page will reload even though it has
   not changed.

Use "data-target" instead of "href" to fix it. ref:[2]

[1]https://github.com/openstack/horizon/blob/master/
openstack_dashboard/static/app/core/images/images.module.js#L190
[2]https://github.com/openstack/xstatic-bootstrap-scss/blob/
master/xstatic/pkg/bootstrap_scss/data/js/bootstrap.js#L505

Change-Id: I1c84c6af49a67bf2833ad5b0103f6cbd4abd0ddb
Closes-Bug: #1543327


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1543327

Title:
  Angular controllers (routes being evaluated) on django navbar clicks
  (hash routing)

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  While evaluating another patch, I found that if I'm on an angular page
  (ng-images), then whenever I click an accordion on the horizon navbar
  (e.g. start on Project --> Images, then click Admin), the current
  angular controller refreshes.

  
  See picture, but note that I have paused the debugger in a Keystone API hit, 
that you can see the requests in the terminal window, and that in the URL you 
can see #sidebar-accordion-admin.

  http://pasteboard.co/1pw530qO.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1543327/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582407] Re: Add setting default max_burst value if not given by user

2016-05-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/317645
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=69d3e328d987092fb560a84a64856fa86a964822
Submitter: Jenkins
Branch:master

commit 69d3e328d987092fb560a84a64856fa86a964822
Author: Sławek Kapłoński 
Date:   Tue May 17 17:32:41 2016 +

[networking] Add QoS default burst value

Add note about default burst value in underlying
implementation of QoS bandwidth limit rules for Open
vSwitch and Linux bridge agents.

Change-Id: I4e8af070128c33634b14393ec2d964c1f55c6dc1
Closes-Bug: #1582407
Co-Authored-By: Matt Kassawara 


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1582407

Title:
  Add setting default max_burst value if not given by user

Status in neutron:
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/311609
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit f766fc71bedc58a0cff0c6dc6e5576f0ab5e2507
  Author: Sławek Kapłoński 
  Date:   Sat Apr 30 00:53:22 2016 +0200

  Add setting default max_burst value if not given by user
  
  If the user does not provide a burst value for a bandwidth limit rule,
  an appropriate value needs to be set to ensure that the bandwidth
  limit works correctly.
  To ensure at least proper limitation of TCP traffic we decided to set
  burst to 80% of the bw_limit value.
  The LinuxBridge agent already sets it like that. This patch adds the
  same behaviour to the QoS extension driver in the openvswitch L2 agent.
  
  DocImpact: Add setting default max_burst value in LB and OvS drivers
  
  Change-Id: Ib12a7cbf88cdffd10c8f6f8442582bf7377ca27d
  Closes-Bug: #1572670
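
  A minimal sketch of the default described above, assuming the 80% factor
  applies whenever max_burst is unset or zero (the helper name is
  illustrative):

      def effective_burst(max_kbps, max_burst_kbps=None):
          """Return the burst value to program, in kbps."""
          if max_burst_kbps:
              return max_burst_kbps
          # Fall back to 80% of the bandwidth limit, matching what the
          # Linux bridge agent already does.
          return int(max_kbps * 0.8)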

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1582407/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583999] [NEW] BDM is not deleted if an instance booted from volume failed at the schedule stage

2016-05-20 Thread Jiajun Liu
Public bug reported:

Description


I did some testing on instances booted from volume. I found that an
instance booted from volume sometimes fails the evacuate operation.
After some digging, I found that the evacuate failed because the
conductor service returned a wrong block device mapping that had no
connection info. Digging further, I found BDM rows that should NOT
exist because they belong to a deleted instance. After some more
testing, I found a way to reproduce this problem.

Steps to reproduce

1, create a volume from image (image-volume1)
2, stop or disable all nova-compute
3, boot an instance (bfv1) from volume (image-volume1)
4, wait until the instance reaches the ERROR state
5, delete the instance we just created
6, look at the block_device_mapping table of the nova database and note that the instance's block device mapping still exists
7, boot another instance (bfv2) from volume (image-volume1)
8, execute the evacuate operation on bfv2
9, the evacuate operation fails and bfv2 becomes ERROR.

Environment

* centos 7
* liberty openstack

I looked at the master branch code. This bug still exists.
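
A minimal sketch of the missing cleanup, assuming that deleting an
instance which never reached a host should also destroy its block device
mappings (the function is illustrative; BlockDeviceMappingList is used in
the spirit of the nova objects API):

    from nova import objects

    def delete_instance_records(context, instance):
        # Destroy BDMs first so a later boot from the same volume does
        # not pick up a stale mapping without connection_info.
        bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
            context, instance.uuid)
        for bdm in bdms:
            bdm.destroy()
        instance.destroy()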

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583999

Title:
  BDM is not deleted if an instance booted from volume failed at the
  schedule stage

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  

  I did some testing on instances booted from volume. I found that an
  instance booted from volume sometimes fails the evacuate operation.
  After some digging, I found that the evacuate failed because the
  conductor service returned a wrong block device mapping that had no
  connection info. Digging further, I found BDM rows that should NOT
  exist because they belong to a deleted instance. After some more
  testing, I found a way to reproduce this problem.

  Steps to reproduce
  
  1, create a volume from image (image-volume1)
  2, stop or disable all nova-compute
  3, boot an instance (bfv1) from volume (image-volume1)
  4, wait until the instance reaches the ERROR state
  5, delete the instance we just created
  6, look at the block_device_mapping table of the nova database and note that the instance's block device mapping still exists
  7, boot another instance (bfv2) from volume (image-volume1)
  8, execute the evacuate operation on bfv2
  9, the evacuate operation fails and bfv2 becomes ERROR.

  Environment
  
  * centos 7
  * liberty openstack

  I looked at the master branch code. This bug still exists.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1583999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583977] Re: liberty neutron-l3-agent ha fails to spawn keepalived

2016-05-20 Thread Tobias Urdin
Moving the pid files for the affected router solves the issue.
mv /var/lib/neutron/ha_confs/c1cc1a5d-c0ef-47b7-8d5c-88403e134725.* /root

Found fix thanks to frickler on IRC. It has been merged for liberty
https://review.openstack.org/#/c/299138/3

** Changed in: cloud-archive
   Status: New => Invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583977

Title:
  liberty neutron-l3-agent ha fails to spawn keepalived

Status in Ubuntu Cloud Archive:
  Invalid
Status in neutron:
  Invalid

Bug description:
  After upgrading to 7.0.4 I have several routers that fail to spawn
  the keepalived process.

  The logs say
  2016-05-20 11:01:11.181 23023 ERROR neutron.agent.linux.external_process [-] default-service for router with uuid c1cc1a5d-c0ef-47b7-8d5c-88403e134725 not found. The process should not have died
  2016-05-20 11:01:11.181 23023 ERROR neutron.agent.linux.external_process [-] respawning keepalived for uuid c1cc1a5d-c0ef-47b7-8d5c-88403e134725
  2016-05-20 11:01:11.182 23023 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-c1cc1a5d-c0ef-47b7-8d5c-88403e134725', 'keepalived', '-P', '-f', '/var/lib/neutron/ha_confs/c1cc1a5d-c0ef-47b7-8d5c-88403e134725/keepalived.conf', '-p', '/var/lib/neutron/ha_confs/c1cc1a5d-c0ef-47b7-8d5c-88403e134725.pid', '-r', '/var/lib/neutron/ha_confs/c1cc1a5d-c0ef-47b7-8d5c-88403e134725.pid-vrrp'] create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:85

  All these spawns fail and keepalived outputs to syslog
  May 20 11:01:11 neutron1 Keepalived[46558]: Starting Keepalived v1.2.19 (09/04,2015)
  May 20 11:01:11 neutron1 Keepalived[46558]: daemon is already running

  but the daemon is not running
  the only thing running is the neutron-keepalived-state-change

  root@neutron1:~# ps auxf | grep c1cc1a5d
  root 48137  0.0  0.0  11740   936 pts/4    S+   11:03   0:00  |   \_ grep --color=auto c1cc1a5d
  neutron  21671  0.0  0.0 124924 40172 ?        S    May19   0:00 /usr/bin/python /usr/bin/neutron-keepalived-state-change --router_id=c1cc1a5d-c0ef-47b7-8d5c-88403e134725 --namespace=qrouter-c1cc1a5d-c0ef-47b7-8d5c-88403e134725 --conf_dir=/var/lib/neutron/ha_confs/c1cc1a5d-c0ef-47b7-8d5c-88403e134725 --monitor_interface=ha-ef4e2a2f-66 --monitor_cidr=169.254.0.1/24 --pid_file=/var/lib/neutron/external/pids/c1cc1a5d-c0ef-47b7-8d5c-88403e134725.monitor.pid --state_path=/var/lib/neutron --user=107 --group=112

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: neutron-l3-agent 2:7.0.4-0ubuntu1~cloud0 [origin: Canonical]
  ProcVersionSignature: Ubuntu 3.13.0-86.131-generic 3.13.11-ckt39
  Uname: Linux 3.13.0-86-generic x86_64
  NonfreeKernelModules: hcpdriver
  ApportVersion: 2.14.1-0ubuntu3.20
  Architecture: amd64
  CrashDB:
   {
  "impl": "launchpad",
  "project": "cloud-archive",
  "bug_pattern_url": 
"http://people.canonical.com/~ubuntu-archive/bugpatterns/bugpatterns.xml;,
   }
  Date: Fri May 20 11:00:01 2016
  PackageArchitecture: all
  SourcePackage: neutron
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1583977/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582816] Re: ModalBackdropMixin imposes empty init method

2016-05-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/317628
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=bb1547c8abcc0d0d3dc4a3199e8e8bd1b5527c12
Submitter: Jenkins
Branch:master

commit bb1547c8abcc0d0d3dc4a3199e8e8bd1b5527c12
Author: Kirill Zaitsev 
Date:   Tue May 17 19:59:40 2016 +0300

Add *args, **kwargs to ModalBackdropMixin's init method

Before this change ModalBackdropMixin called super(...).__init__()
without arguments. This imposed restrictions on which classes this
mixin could be mixed into, i.e. only classes that do not accept
any parameters.
This proved to be a problem for murano-dashboard, since it uses this
mixin (indirectly through ModalFormMixin) and mixes it into a class
that accepts parameters to its init method.
This change allows ModalBackdropMixin to be used with classes whose
init methods take parameters.

Change-Id: I6155476738021b784ef7e643c968f1d784b15906
Closes-Bug: #1582816
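
A minimal sketch of the resulting mixin, which simply forwards whatever
arguments the concrete class receives (reduced to the pattern, not the
exact Horizon source):

    class ModalBackdropMixin(object):
        def __init__(self, *args, **kwargs):
            super(ModalBackdropMixin, self).__init__(*args, **kwargs)
            # mixin-specific setup continues here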


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1582816

Title:
  ModalBackdropMixin imposes empty init method

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Recently a commit landed in horizon, that changed inheritance order of
  Mixins.
  
https://github.com/openstack/horizon/commit/4c33d2d40cac592385f7bcfbc106c379d7b70020
  The change itself is ok; however, this now means that any class
  inherited from ModalFormMixin must have an init without parameters.

  This change broke murano-dashboard since we inherit from both formtools 
wizard and ModalFormMixin.
  Here is an example of the errors we get: http://paste.openstack.org/show/497387/

  It might be a good idea to allow *args, **kwargs in
  ModalBackdropMixin's init method

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1582816/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583977] [NEW] liberty neutron-l3-agent ha fails to spawn keepalived

2016-05-20 Thread Tobias Urdin
Public bug reported:

After upgrading to 7.0.4 I have several routers that fail to spawn the
keepalived process.

The logs say
2016-05-20 11:01:11.181 23023 ERROR neutron.agent.linux.external_process [-] default-service for router with uuid c1cc1a5d-c0ef-47b7-8d5c-88403e134725 not found. The process should not have died
2016-05-20 11:01:11.181 23023 ERROR neutron.agent.linux.external_process [-] respawning keepalived for uuid c1cc1a5d-c0ef-47b7-8d5c-88403e134725
2016-05-20 11:01:11.182 23023 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-c1cc1a5d-c0ef-47b7-8d5c-88403e134725', 'keepalived', '-P', '-f', '/var/lib/neutron/ha_confs/c1cc1a5d-c0ef-47b7-8d5c-88403e134725/keepalived.conf', '-p', '/var/lib/neutron/ha_confs/c1cc1a5d-c0ef-47b7-8d5c-88403e134725.pid', '-r', '/var/lib/neutron/ha_confs/c1cc1a5d-c0ef-47b7-8d5c-88403e134725.pid-vrrp'] create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:85

All these spawns fail and keepalived outputs to syslog
May 20 11:01:11 neutron1 Keepalived[46558]: Starting Keepalived v1.2.19 (09/04,2015)
May 20 11:01:11 neutron1 Keepalived[46558]: daemon is already running

but the daemon is not running
the only thing running is the neutron-keepalived-state-change

root@neutron1:~# ps auxf | grep c1cc1a5d
root 48137  0.0  0.0  11740   936 pts/4    S+   11:03   0:00  |   \_ grep --color=auto c1cc1a5d
neutron  21671  0.0  0.0 124924 40172 ?        S    May19   0:00 /usr/bin/python /usr/bin/neutron-keepalived-state-change --router_id=c1cc1a5d-c0ef-47b7-8d5c-88403e134725 --namespace=qrouter-c1cc1a5d-c0ef-47b7-8d5c-88403e134725 --conf_dir=/var/lib/neutron/ha_confs/c1cc1a5d-c0ef-47b7-8d5c-88403e134725 --monitor_interface=ha-ef4e2a2f-66 --monitor_cidr=169.254.0.1/24 --pid_file=/var/lib/neutron/external/pids/c1cc1a5d-c0ef-47b7-8d5c-88403e134725.monitor.pid --state_path=/var/lib/neutron --user=107 --group=112

ProblemType: Bug
DistroRelease: Ubuntu 14.04
Package: neutron-l3-agent 2:7.0.4-0ubuntu1~cloud0 [origin: Canonical]
ProcVersionSignature: Ubuntu 3.13.0-86.131-generic 3.13.11-ckt39
Uname: Linux 3.13.0-86-generic x86_64
NonfreeKernelModules: hcpdriver
ApportVersion: 2.14.1-0ubuntu3.20
Architecture: amd64
CrashDB:
 {
"impl": "launchpad",
"project": "cloud-archive",
"bug_pattern_url": 
"http://people.canonical.com/~ubuntu-archive/bugpatterns/bugpatterns.xml;,
 }
Date: Fri May 20 11:00:01 2016
PackageArchitecture: all
SourcePackage: neutron
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: cloud-archive
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug regression-update third-party-packages trusty

** Also affects: neutron
   Importance: Undecided
   Status: New

** Tags added: regression-update

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583977

Title:
  liberty neutron-l3-agent ha fails to spawn keepalived

Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  New

Bug description:
  After upgrading to 7.0.4 I have several routers that fail to spawn
  the keepalived process.

  The logs say
  2016-05-20 11:01:11.181 23023 ERROR neutron.agent.linux.external_process [-] default-service for router with uuid c1cc1a5d-c0ef-47b7-8d5c-88403e134725 not found. The process should not have died
  2016-05-20 11:01:11.181 23023 ERROR neutron.agent.linux.external_process [-] respawning keepalived for uuid c1cc1a5d-c0ef-47b7-8d5c-88403e134725
  2016-05-20 11:01:11.182 23023 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-c1cc1a5d-c0ef-47b7-8d5c-88403e134725', 'keepalived', '-P', '-f', '/var/lib/neutron/ha_confs/c1cc1a5d-c0ef-47b7-8d5c-88403e134725/keepalived.conf', '-p', '/var/lib/neutron/ha_confs/c1cc1a5d-c0ef-47b7-8d5c-88403e134725.pid', '-r', '/var/lib/neutron/ha_confs/c1cc1a5d-c0ef-47b7-8d5c-88403e134725.pid-vrrp'] create_process /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:85

  All these spawns fail and keepalived outputs to syslog
  May 20 11:01:11 neutron1 Keepalived[46558]: Starting Keepalived v1.2.19 (09/04,2015)
  May 20 11:01:11 neutron1 Keepalived[46558]: daemon is already running

  but the daemon is not running
  the only thing running is the neutron-keepalived-state-change

  root@neutron1:~# ps auxf | grep c1cc1a5d
  root 48137  0.0  0.0  11740   936 pts/4    S+   11:03   0:00  |   \_ grep --color=auto c1cc1a5d
  neutron  21671  0.0  0.0 124924 40172 ?        S    May19   0:00 /usr/bin/python /usr/bin/neutron-keepalived-state-change --router_id=c1cc1a5d-c0ef-47b7-8d5c-88403e134725 

[Yahoo-eng-team] [Bug 1583511] Re: when provisioning_status of loadbalancer is error, we create listener, then provisioning_status of loadbalancer is always PENDING_UPDATE

2016-05-20 Thread dongjuan
** Project changed: openstack-api-site => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583511

Title:
  when provisioning_status of loadbalancer is error, we create
  listener, then provisioning_status of loadbalancer is always
  PENDING_UPDATE

Status in neutron:
  New

Bug description:
  issue is in kilo branch

  when the provisioning_status of a loadbalancer is ERROR and we create a
  listener, the provisioning_status of the loadbalancer stays PENDING_UPDATE
  and the provisioning_status of the listener stays PENDING_CREATE

  lb agent log is:
  2013-11-19 13:48:18.227 21025 ERROR oslo_messaging.rpc.dispatcher 
[req-ca5b3552-240a-461a-bd37-8467b465c096 ] Exception during message handling: 
An unknown exception occurred.
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
296, in delete_loadbalancer
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher driver 
= self._get_driver(loadbalancer.id)
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
170, in _get_driver
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher raise 
DeviceNotFoundOnAgent(loadbalancer_id=loadbalancer_id)
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher 
DeviceNotFoundOnAgent: An unknown exception occurred.
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher
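
  A minimal sketch of why the agent raises here, assuming it keeps an
  in-memory map of the load balancers it has deployed and therefore loses
  track of devices that errored before deployment or across a restart
  (the attribute names are illustrative):

      def _get_driver(self, loadbalancer_id):
          if loadbalancer_id not in self.instance_mapping:
              # No entry means the device was never deployed (or the agent
              # restarted), so every later update/delete fails here and
              # the PENDING_* status is never cleared.
              raise DeviceNotFoundOnAgent(loadbalancer_id=loadbalancer_id)
          driver_name = self.instance_mapping[loadbalancer_id]
          return self.device_drivers[driver_name]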

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582522] Re: Pecan: deprecation warnings with functional tests

2016-05-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/317254
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=d2630f2dd43148e08aba1ef4f17ad3465350d994
Submitter: Jenkins
Branch:master

commit d2630f2dd43148e08aba1ef4f17ad3465350d994
Author: Gary Kotton 
Date:   Mon May 16 22:21:04 2016 -0700

Pecan: remove deprecation warning

Addresses the warning below:

DeprecationWarning: BaseException.message has been deprecated as of Python 
2.6

Change-Id: I1aaebb28c3c9ca2ede9e900428fb5c7eef6d29e6
Closes-bug: #1582522
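
A minimal sketch of the change, assuming the fix is the usual replacement
of the deprecated BaseException.message attribute with an explicit string
conversion:

    import webob.exc

    def forbid(e):
        # Before (warns on Python 2.6+): webob.exc.HTTPForbidden(e.message)
        # str(e) yields the same text without touching .message.
        raise webob.exc.HTTPForbidden(str(e))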


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1582522

Title:
  Pecan: deprecation warnings with functional tests

Status in neutron:
  Fix Released

Bug description:
  Functional tests show:

  2016-05-17 02:57:54.378 | 2016-05-17 02:57:54.326 | 
  2016-05-17 02:58:05.690 | 2016-05-17 02:58:05.638 | neutron/pecan_wsgi/hooks/policy_enforcement.py:166: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
  2016-05-17 02:58:05.692 | 2016-05-17 02:58:05.640 |   raise webob.exc.HTTPForbidden(e.message)
  2016-05-17 02:58:05.693 | 2016-05-17 02:58:05.641 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1582522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583955] Re: provisioning_status of loadbalancer is always PENDING_UPDATE when following these steps

2016-05-20 Thread dongjuan
** Project changed: openstack-api-site => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583955

Title:
  provisioning_status of loadbalancer is always PENDING_UPDATE when
  following these steps

Status in neutron:
  New

Bug description:
  issue is in kilo branch;

  following these steps:
  1. update admin_state_up of loadbalancer to False
  2. restart lbaas agent
  3. update admin_state_up of loadbalancer to True

  then the provisioning_status of loadbalancer is always PENDING_UPDATE

  agent log is:
  2013-11-20 12:33:54.358 12601 ERROR oslo_messaging.rpc.dispatcher 
[req-add12f1f-f693-4f0b-9eae-5204d8a50a3f ] Exception during message handling: 
An unknown exception occurred.
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
282, in update_loadbalancer
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher driver 
= self._get_driver(loadbalancer.id)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
168, in _get_driver
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher raise 
DeviceNotFoundOnAgent(loadbalancer_id=loadbalancer_id)
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher 
DeviceNotFoundOnAgent: An unknown exception occurred.
  2013-11-20 12:33:54.358 12601 TRACE oslo_messaging.rpc.dispatcher

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583955/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583511] [NEW] when provisioning_status of loadbalancer is error, we create listener, then provisioning_status of loadbalancer is always PENDING_UPDATE

2016-05-20 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

issue is in kilo branch

when the provisioning_status of a loadbalancer is ERROR and we create a
listener, the provisioning_status of the loadbalancer stays PENDING_UPDATE
and the provisioning_status of the listener stays PENDING_CREATE

lb agent log is:
2013-11-19 13:48:18.227 21025 ERROR oslo_messaging.rpc.dispatcher 
[req-ca5b3552-240a-461a-bd37-8467b465c096 ] Exception during message handling: 
An unknown exception occurred.
2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
296, in delete_loadbalancer
2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher driver = 
self._get_driver(loadbalancer.id)
2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
170, in _get_driver
2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher raise 
DeviceNotFoundOnAgent(loadbalancer_id=loadbalancer_id)
2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher 
DeviceNotFoundOnAgent: An unknown exception occurred.
2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas
-- 
when provisioning_status of loadbalancer is error, we create listener, then 
provisioning_status of loadbalancer is always PENDING_UPDATE
https://bugs.launchpad.net/bugs/1583511
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583958] [NEW] non-admin can create and list bgpspeakers but cannot show or update speaker

2016-05-20 Thread flynnmmm
Public bug reported:

Here is the configuration:
[root@SG-dev-flynn-3-fwaas devstack]# source openrc demo demo
WARNING: setting legacy OS_TENANT_NAME to support cli tools.
[root@SG-dev-flynn-3-fwaas devstack]# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
[root@SG-dev-flynn-3-fwaas devstack]# neutron bgp-speaker-create --ip-version 4 --local-as 777 106
Created a new bgp_speaker:
+-----------------------------------+--------------------------------------+
| Field                             | Value                                |
+-----------------------------------+--------------------------------------+
| advertise_floating_ip_host_routes | True                                 |
| advertise_tenant_networks         | True                                 |
| id                                | ee03ac3c-81c8-46ce-abad-e3fac4877e87 |
| ip_version                        | 4                                    |
| local_as                          | 777                                  |
| name                              | 106                                  |
| networks                          |                                      |
| peers                             |                                      |
| tenant_id                         | 01c10991df8749d8a79694dad6dfb836     |
+-----------------------------------+--------------------------------------+
[root@SG-dev-flynn-3-fwaas devstack]# neutron bgp-speaker-create --ip-version 4 --local-as 7788 101
Created a new bgp_speaker:
+-----------------------------------+--------------------------------------+
| Field                             | Value                                |
+-----------------------------------+--------------------------------------+
| advertise_floating_ip_host_routes | True                                 |
| advertise_tenant_networks         | True                                 |
| id                                | cb0a27e5-42a6-44c1-914b-9bce85a4d1e1 |
| ip_version                        | 4                                    |
| local_as                          | 7788                                 |
| name                              | 101                                  |
| networks                          |                                      |
| peers                             |                                      |
| tenant_id                         | 01c10991df8749d8a79694dad6dfb836     |
+-----------------------------------+--------------------------------------+
[root@SG-dev-flynn-3-fwaas devstack]# neutron bgp-speaker-list
+--------------------------------------+------+----------+------------+
| id                                   | name | local_as | ip_version |
+--------------------------------------+------+----------+------------+
| cb0a27e5-42a6-44c1-914b-9bce85a4d1e1 | 101  |     7788 |          4 |
| ee03ac3c-81c8-46ce-abad-e3fac4877e87 | 106  |      777 |          4 |
+--------------------------------------+------+----------+------------+
[root@SG-dev-flynn-3-fwaas devstack]# neutron bgp-speaker-show cb0a27e5-42a6-44c1-914b-9bce85a4d1e1
Failed to check policy tenant_id:%(tenant_id)s because Unable to verify match:%(tenant_id)s as the parent resource: tenant was not found.
Neutron server returns request_ids: ['req-bff87635-2767-4bfd-b6e0-cc1399136d88']
[root@SG-dev-flynn-3-fwaas devstack]# neutron bgp-speaker-show 101
Failed to check policy tenant_id:%(tenant_id)s because Unable to verify match:%(tenant_id)s as the parent resource: tenant was not found.
Neutron server returns request_ids: ['req-fd336b49-70e3-4a20-ba2d-9ca9889ea05c']
[root@SG-dev-flynn-3-fwaas devstack]# neutron bgp-speaker-show 106
Failed to check policy tenant_id:%(tenant_id)s because Unable to verify match:%(tenant_id)s as the parent resource: tenant was not found.
Neutron server returns request_ids: ['req-70354c3c-d59a-4f69-ba3a-54edbce12e44']
[root@SG-dev-flynn-3-fwaas devstack]# neutron bgp-speaker-update --advertise-floating-ip-host-routes=False 106
Failed to check policy tenant_id:%(tenant_id)s because Unable to verify match:%(tenant_id)s as the parent resource: tenant was not found.
Neutron server returns request_ids: ['req-a13edca8-7d55-4568-a94f-a6bd228923fc']

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-bgp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583958

Title:
  non-admin can create and list bgpspeakers but cannot show or update
  speaker

Status in neutron:
  New

Bug description:
  Here is the configuration:
  [root@SG-dev-flynn-3-fwaas devstack]# source openrc demo demo
  WARNING: setting legacy OS_TENANT_NAME 

[Yahoo-eng-team] [Bug 1583918] Re: why all mechanism_drivers in configuration will be executed when any vnic_type VM is created

2016-05-20 Thread QunyingRan
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583918

Title:
  why all mechanism_drivers in configuration will be executed when any
  vnic_type VM is created

Status in neutron:
  Invalid

Bug description:
  Some mechanism drivers are used for OVS VMs and others for direct, macvtap, 
and so on; but when we create any VM, all mechanism_drivers are executed 
instead of selecting only the suitable mechanism driver to bind the port. 
  For example, with 'openvswitch' and 'sriovnicswitch' in the ml2 
mechanism_drivers configuration, when an SR-IOV VM is created only 'bind_port' 
in 'SriovNicSwitchMechanismDriver' should run, but we found that 'bind_port' 
in OpenvswitchMechanismDriver was also called.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583918/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583919] Re: --no-ssl-compression is deprecated

2016-05-20 Thread Sharat Sharma
** Project changed: glance => python-glanceclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1583919

Title:
  --no-ssl-compression is deprecated

Status in python-glanceclient:
  New

Bug description:
  The glance --help gives the following in optional arguments:

   --no-ssl-compression  DEPRECATED! This option is deprecated and not used
  anymore. SSL compression should be disabled by default
  by the system SSL library.

  The deprecated option must not be shown in the help message to avoid
  confusion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1583919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583948] [NEW] getting whole user-roles in domain or project in V3

2016-05-20 Thread yangweiwei
Public bug reported:

If a user joins a group and the group has roles on a domain, we
currently cannot get the whole set of user roles from the domain,
even though the user should have the group's domain roles (the user
belongs to the domain).

E.g. Group1 has role1 in domain1, and a user from domain1 joins Group1;
in V3 the user should have role1, but currently we cannot get that role
from /v3/domains/domain1/users/user/roles.
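
A minimal sketch of the expected "effective" role computation, assuming
the API should union direct assignments with assignments inherited
through group membership (the data shapes are illustrative):

    def effective_domain_roles(user_id, domain_id, assignments, groups_of):
        """assignments maps (actor_id, domain_id) -> set of role ids;
        groups_of maps user_id -> set of group ids."""
        roles = set(assignments.get((user_id, domain_id), set()))
        for group_id in groups_of.get(user_id, set()):
            # Group1's role1 in domain1 should show up here for the user.
            roles |= assignments.get((group_id, domain_id), set())
        return roles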

** Affects: keystone
 Importance: Undecided
 Assignee: yangweiwei (496176919-6)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1583948

Title:
  getting whole user-roles in domain or project in V3

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  If a user joins a group and the group has roles on a domain, we
  currently cannot get the whole set of user roles from the domain,
  even though the user should have the group's domain roles (the user
  belongs to the domain).

  E.g. Group1 has role1 in domain1, and a user from domain1 joins Group1;
  in V3 the user should have role1, but currently we cannot get that role
  from /v3/domains/domain1/users/user/roles.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1583948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583923] Re: deprecated arguments in the help message

2016-05-20 Thread Sharat Sharma
** Project changed: glance => python-glanceclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1583923

Title:
  deprecated arguments in the help message

Status in python-glanceclient:
  Confirmed

Bug description:
  The help message of glance gives:
--key-file OS_KEY DEPRECATED! Use --os-key.
--ca-file OS_CACERT   DEPRECATED! Use --os-cacert.
--cert-file OS_CERT   DEPRECATED! Use --os-cert.

  Instead of showing the deprecated flags in the help text, only the correct 
arguments should be displayed as optional arguments.
  For example:

  --os-key instead of --key-file

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1583923/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583923] [NEW] deprecated arguments in the help message

2016-05-20 Thread Sharat Sharma
Public bug reported:

The help message of glance gives:
  --key-file OS_KEY DEPRECATED! Use --os-key.
  --ca-file OS_CACERT   DEPRECATED! Use --os-cacert.
  --cert-file OS_CERT   DEPRECATED! Use --os-cert.

Instead of showing the deprecated flags in the help text, only the correct 
arguments should be displayed as optional arguments.
For example:

--os-key instead of --key-file
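
A minimal sketch of how such flags are usually hidden, assuming the
client builds its parser with argparse (the option names come from the
report; the wiring is illustrative):

    import argparse

    parser = argparse.ArgumentParser(prog='glance')
    parser.add_argument('--os-key', metavar='OS_KEY')
    # Keep accepting the deprecated spelling, but hide it from --help:
    parser.add_argument('--key-file', dest='os_key', help=argparse.SUPPRESS)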

** Affects: glance
 Importance: Undecided
 Assignee: Sharat Sharma (sharat-sharma)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Sharat Sharma (sharat-sharma)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1583923

Title:
  deprecated arguments in the help message

Status in Glance:
  New

Bug description:
  The help message of glance gives:
--key-file OS_KEY DEPRECATED! Use --os-key.
--ca-file OS_CACERT   DEPRECATED! Use --os-cacert.
--cert-file OS_CERT   DEPRECATED! Use --os-cert.

  Instead of showing the deprecated flags in the help text, only the correct 
arguments should be displayed as optional arguments.
  For example:

  --os-key instead of --key-file

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1583923/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583918] [NEW] why all mechanism_drivers in configuration will be executed when any vnic_type VM is created

2016-05-20 Thread QunyingRan
Public bug reported:

Some mechanism drivers are used for OVS VMs and others for direct, macvtap, 
and so on; but when we create any VM, all mechanism_drivers are executed 
instead of selecting only the suitable mechanism driver to bind the port. 
For example, with 'openvswitch' and 'sriovnicswitch' in the ml2 
mechanism_drivers configuration, when an SR-IOV VM is created only 'bind_port' 
in 'SriovNicSwitchMechanismDriver' should run, but we found that 'bind_port' 
in OpenvswitchMechanismDriver was also called.
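
A minimal sketch of the behaviour, assuming the ML2 manager calls every
registered driver in order and relies on each driver to decline ports
whose vnic_type it does not support (the class and names are illustrative):

    class SriovLikeDriver(object):
        SUPPORTED_VNIC_TYPES = ['direct', 'macvtap']

        def bind_port(self, context):
            # Every registered mechanism driver receives this call; a
            # well-behaved driver checks the vnic_type itself and declines
            # ports it does not handle.
            vnic_type = context.current.get('binding:vnic_type', 'normal')
            if vnic_type not in self.SUPPORTED_VNIC_TYPES:
                return  # not an error: another driver may bind this port
            # ... attempt to bind a segment here ...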

** Affects: neutron
 Importance: Undecided
 Assignee: QunyingRan (ran-qunying)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => QunyingRan (ran-qunying)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583918

Title:
  why all mechanism_drivers in configuration will be executed when any
  vnic_type VM is created

Status in neutron:
  New

Bug description:
  Some mechanism drivers are used for OVS VMs and others for direct, macvtap, 
and so on; but when we create any VM, all mechanism_drivers are executed 
instead of selecting only the suitable mechanism driver to bind the port. 
  For example, with 'openvswitch' and 'sriovnicswitch' in the ml2 
mechanism_drivers configuration, when an SR-IOV VM is created only 'bind_port' 
in 'SriovNicSwitchMechanismDriver' should run, but we found that 'bind_port' 
in OpenvswitchMechanismDriver was also called.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583918/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583919] [NEW] --no-ssl-compression is deprecated

2016-05-20 Thread Sharat Sharma
Public bug reported:

The glance --help gives the following in optional arguments:

 --no-ssl-compression  DEPRECATED! This option is deprecated and not used
anymore. SSL compression should be disabled by default
by the system SSL library.

The deprecated option must not be shown in the help message to avoid
confusion.

** Affects: glance
 Importance: Undecided
 Assignee: Sharat Sharma (sharat-sharma)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Sharat Sharma (sharat-sharma)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1583919

Title:
  --no-ssl-compression is deprecated

Status in Glance:
  New

Bug description:
  The glance --help gives the following in optional arguments:

   --no-ssl-compression  DEPRECATED! This option is deprecated and not used
  anymore. SSL compression should be disabled by default
  by the system SSL library.

  The deprecated option must not be shown in the help message to avoid
  confusion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1583919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583915] [NEW] SRIOV : AttributeError: 'dict' object has no attribute 'partition'

2016-05-20 Thread prabhu murthy
Public bug reported:


With the pci whitelist below in nova.conf, we get the error below:

pci_passthrough_whitelist = {"address":{"domain": ".*","bus": "04", "slot": "00","function": "[1-2]"},"physical_network":"physnet1"}
pci_passthrough_whitelist = {"address":{"domain": ".*","bus": "04", "slot": "00","function": "[3-4]"},"physical_network":"physnet2"}


2016-05-20 02:37:30.100 ERROR nova.compute.manager 
[req-67cc2371-7005-4485-bbb0-b19d5075fd09 None None] Error updating resources 
for node stack.
2016-05-20 02:37:30.100 TRACE nova.compute.manager Traceback (most recent call 
last):
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/manager.py", line 6275, in 
update_available_resource_for_node
2016-05-20 02:37:30.100 TRACE nova.compute.manager 
rt.update_available_resource(context)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 478, in 
update_available_resource
2016-05-20 02:37:30.100 TRACE nova.compute.manager 
self._update_available_resource(context, resources)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
2016-05-20 02:37:30.100 TRACE nova.compute.manager return f(*args, **kwargs)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 499, in 
_update_available_resource
2016-05-20 02:37:30.100 TRACE nova.compute.manager 
self._init_compute_node(context, resources)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 377, in 
_init_compute_node
2016-05-20 02:37:30.100 TRACE nova.compute.manager 
self._setup_pci_tracker(context, resources)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 396, in 
_setup_pci_tracker
2016-05-20 02:37:30.100 TRACE nova.compute.manager self.pci_tracker = 
pci_manager.PciDevTracker(context, node_id=n_id)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/manager.py", line 58, in __init__
2016-05-20 02:37:30.100 TRACE nova.compute.manager self.dev_filter = 
whitelist.Whitelist(CONF.pci_passthrough_whitelist)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/whitelist.py", line 78, in __init__
2016-05-20 02:37:30.100 TRACE nova.compute.manager self.specs = 
self._parse_white_list_from_config(whitelist_spec)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/whitelist.py", line 59, in 
_parse_white_list_from_config
2016-05-20 02:37:30.100 TRACE nova.compute.manager spec = 
devspec.PciDeviceSpec(ds)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/devspec.py", line 134, in __init__
2016-05-20 02:37:30.100 TRACE nova.compute.manager self._init_dev_details()
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/devspec.py", line 159, in _init_dev_details
2016-05-20 02:37:30.100 TRACE nova.compute.manager self.address = 
PciAddress(self.address, pf)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/devspec.py", line 67, in __init__
2016-05-20 02:37:30.100 TRACE nova.compute.manager 
self._init_address_fields(pci_addr)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/devspec.py", line 80, in _init_address_fields
2016-05-20 02:37:30.100 TRACE nova.compute.manager dbs, sep, func = 
pci_addr.partition('.')
2016-05-20 02:37:30.100 TRACE nova.compute.manager AttributeError: 'dict' 
object has no attribute 'partition'
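
A minimal sketch of the failing parse, assuming the whitelist code at this
point expects "address" to be a flat string such as "0000:04:00.1" rather
than the nested dict used in the configuration above:

    def split_address(pci_addr):
        # Works for a string like "0000:04:00.1" ...
        dbs, _sep, func = pci_addr.partition('.')
        domain, bus, slot = dbs.split(':')
        return domain, bus, slot, func

    # ... but a dict such as {"domain": ".*", "bus": "04"} has no
    # .partition() method, which is exactly the AttributeError above.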

** Affects: nova
 Importance: Undecided
 Status: Invalid


** Tags: pci sriov

** Project changed: python-novaclient => nova

** Tags added: pci sriov

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1583915

Title:
  SRIOV : AttributeError: 'dict' object has no attribute 'partition'

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  
  With the pci whitelist below in nova.conf, we get the error below:

  pci_passthrough_whitelist = {"address":{"domain": ".*","bus": "04", "slot": "00","function": "[1-2]"},"physical_network":"physnet1"}
  pci_passthrough_whitelist = {"address":{"domain": ".*","bus": "04", "slot": "00","function": "[3-4]"},"physical_network":"physnet2"}


  2016-05-20 02:37:30.100 ERROR nova.compute.manager 
[req-67cc2371-7005-4485-bbb0-b19d5075fd09 None None] Error updating resources 
for node stack.
  2016-05-20 02:37:30.100 TRACE nova.compute.manager Traceback (most recent 
call last):
  2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 

[Yahoo-eng-team] [Bug 1583915] [NEW] SRIOV : AttributeError: 'dict' object has no attribute 'partition'

2016-05-20 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:


With the pci whitelist below in nova.conf, we get the error below:

pci_passthrough_whitelist = {"address":{"domain": ".*","bus": "04", "slot": "00","function": "[1-2]"},"physical_network":"physnet1"}
pci_passthrough_whitelist = {"address":{"domain": ".*","bus": "04", "slot": "00","function": "[3-4]"},"physical_network":"physnet2"}


2016-05-20 02:37:30.100 ERROR nova.compute.manager 
[req-67cc2371-7005-4485-bbb0-b19d5075fd09 None None] Error updating resources 
for node stack.
2016-05-20 02:37:30.100 TRACE nova.compute.manager Traceback (most recent call 
last):
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/manager.py", line 6275, in 
update_available_resource_for_node
2016-05-20 02:37:30.100 TRACE nova.compute.manager 
rt.update_available_resource(context)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 478, in 
update_available_resource
2016-05-20 02:37:30.100 TRACE nova.compute.manager 
self._update_available_resource(context, resources)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
2016-05-20 02:37:30.100 TRACE nova.compute.manager return f(*args, **kwargs)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 499, in 
_update_available_resource
2016-05-20 02:37:30.100 TRACE nova.compute.manager 
self._init_compute_node(context, resources)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 377, in 
_init_compute_node
2016-05-20 02:37:30.100 TRACE nova.compute.manager 
self._setup_pci_tracker(context, resources)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 396, in 
_setup_pci_tracker
2016-05-20 02:37:30.100 TRACE nova.compute.manager self.pci_tracker = 
pci_manager.PciDevTracker(context, node_id=n_id)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/manager.py", line 58, in __init__
2016-05-20 02:37:30.100 TRACE nova.compute.manager self.dev_filter = 
whitelist.Whitelist(CONF.pci_passthrough_whitelist)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/whitelist.py", line 78, in __init__
2016-05-20 02:37:30.100 TRACE nova.compute.manager self.specs = 
self._parse_white_list_from_config(whitelist_spec)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/whitelist.py", line 59, in 
_parse_white_list_from_config
2016-05-20 02:37:30.100 TRACE nova.compute.manager spec = 
devspec.PciDeviceSpec(ds)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/devspec.py", line 134, in __init__
2016-05-20 02:37:30.100 TRACE nova.compute.manager self._init_dev_details()
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/devspec.py", line 159, in _init_dev_details
2016-05-20 02:37:30.100 TRACE nova.compute.manager self.address = 
PciAddress(self.address, pf)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/devspec.py", line 67, in __init__
2016-05-20 02:37:30.100 TRACE nova.compute.manager 
self._init_address_fields(pci_addr)
2016-05-20 02:37:30.100 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/pci/devspec.py", line 80, in _init_address_fields
2016-05-20 02:37:30.100 TRACE nova.compute.manager dbs, sep, func = 
pci_addr.partition('.')
2016-05-20 02:37:30.100 TRACE nova.compute.manager AttributeError: 'dict' 
object has no attribute 'partition'

** Affects: nova
 Importance: Undecided
 Status: New

-- 
SRIOV : AttributeError: 'dict' object has no attribute 'partition'
https://bugs.launchpad.net/bugs/1583915
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582298] Re: Django compressor cannot find custom theme templates

2016-05-20 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/317051
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=4b870b12c139bda715c9d68e53daf80b51034d42
Submitter: Jenkins
Branch:master

commit 4b870b12c139bda715c9d68e53daf80b51034d42
Author: Diana Whitten 
Date:   Mon May 16 10:34:21 2016 -0700

Django compressor cannot find custom theme templates

An else condition was missed in the Theme Template loader.  Some
templates that are loaded from a theme do start with '/', so
another elif was added to catch this possibility.

Closes-bug: #1582298

Change-Id: I74e147d5abdcb2ab12f1a0c8e7af7fc7f89aff1e
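
A minimal sketch of the loader branch described above, assuming template
names can arrive either theme-relative or as absolute paths starting with
'/' (the structure is illustrative, not the exact Horizon code):

    import os

    def theme_template_path(theme_dir, template_name):
        if template_name.startswith(theme_dir):
            return template_name  # already theme-qualified
        elif template_name.startswith('/'):
            # Names starting with '/' were previously skipped, so the
            # compressor could not resolve them; the added elif handles
            # them like theme-relative names.
            return os.path.join(theme_dir, 'templates',
                                template_name.lstrip('/'))
        return os.path.join(theme_dir, 'templates', template_name)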


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1582298

Title:
  Django compressor cannot find custom theme templates

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Looks like django compressor can't find template files from a custom
  theme. When I try to run ./manage.py compress I get the following
  errors:

  `Non-existent template at:
  
/home/paul/horizon/openstack_dashboard/themes/custom/templates/header/_header.html`

  And Horizon can find that template, because I see that the overridden
  page has been changed.

  If I run it with the material theme with the following settings:
  AVAILABLE_THEMES = [
  ('custom', 'Custom', 'themes/custom'),
  ('material', 'Material', 'themes/material')
  ]
  DEFAULT_THEME = 'material'

  I will get the following errors:

  Non-existent template at: 
/home/paul/horizon/openstack_dashboard/themes/material/templates/header/_header.html
  Non-existent template at: 
/home/paul/horizon/openstack_dashboard/themes/material/templates/auth/_splash.html
  Non-existent template at: 
/home/paul/horizon/openstack_dashboard/themes/material/templates/horizon/_sidebar.html
  Non-existent template at: 
/home/paul/horizon/openstack_dashboard/themes/material/templates/header/_brand.html

  If I remove the DEFAULT_THEME and AVAILABLE_THEMES consts, there are no
  errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1582298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp