[Yahoo-eng-team] [Bug 1738135] [NEW] Should not get domain_scoped token when OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT is FALSE

2017-12-13 Thread chaoliu
Public bug reported:

When debugging Horizon, there is a message: "Attempted scope to domain Default
failed, will attempt to scope to another domain."

(horizon)[root@k1 /]# manage.py runserver 0.0.0.0:8080 --insecure --nothreading --noreload
Performing system checks...

System check identified no issues (0 silenced).
October 25, 2017 - 19:10:34
Django version 1.8.14, using settings 'openstack_dashboard.settings'
Starting development server at http://0.0.0.0:8080/
Quit the server with CONTROL-C.
Attempted scope to domain Default failed, will attempt to scope to another domain.
Login successful for user "admin", remote address 192.168.233.1.
[25/Oct/2017 19:10:42] "POST /auth/login/ HTTP/1.1" 302 0
[25/Oct/2017 19:10:42] "GET / HTTP/1.1" 302 0


This message is caused by trying to get a domain_scoped token when
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT is False.
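
A minimal illustrative sketch (not the actual django_openstack_auth code) of the
guard this report implies: a domain-scoped token should only be attempted when
multi-domain support is enabled in the Django settings.

    from django.conf import settings

    def domain_scoping_enabled():
        # OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT defaults to False in Horizon;
        # when it is False the domain-scope attempt (and the warning it logs)
        # should be skipped entirely.
        return getattr(settings, 'OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT', False)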

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1738135

Title:
  Should not get domain_scoped token when
  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT is FALSE

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When debugging Horizon, there is a message: "Attempted scope to domain
  Default failed, will attempt to scope to another domain."

  (horizon)[root@k1 /]# manage.py runserver 0.0.0.0:8080 --insecure --nothreading --noreload
  Performing system checks...

  System check identified no issues (0 silenced).
  October 25, 2017 - 19:10:34
  Django version 1.8.14, using settings 'openstack_dashboard.settings'
  Starting development server at http://0.0.0.0:8080/
  Quit the server with CONTROL-C.
  Attempted scope to domain Default failed, will attempt to scope to another domain.
  Login successful for user "admin", remote address 192.168.233.1.
  [25/Oct/2017 19:10:42] "POST /auth/login/ HTTP/1.1" 302 0
  [25/Oct/2017 19:10:42] "GET / HTTP/1.1" 302 0

  
  This message is caused by trying to get a domain_scoped token when
  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT is False.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1738135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1219604] Re: Support image versions / large numbers of images

2017-12-13 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1219604

Title:
  Support image versions / large numbers of images

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  When we have multiple image versions (for example, a daily debian
  wheezy image, or a periodically refreshed Jenkins image), we want to
  keep them all around in Glance in case users want to launch a specific
  version.  But the typical user will only want to launch the latest
  image.

  The horizon UI doesn't seem very well attuned to having more than a
  handful of images.  As far as I can tell, every image is displayed in
  the "launch-server" drop-down, and only the image name is shown, so
  everything must be encoded in the name.  For a large number of images,
  a drop-down doesn't really scale.

  On the image page, maybe we should hide "obsolete" images by default.
  It would also be nice if there was a search box,  so you could type a
  few characters to narrow the image list.

  How do people cope with large numbers of images?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1219604/+subscriptions



[Yahoo-eng-team] [Bug 1116467] Re: Unpredictable Dashboard behaviour.

2017-12-13 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1116467

Title:
  Unpredictable Dashboard behaviour.

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  OpenStack Dashboard shows unpredictable behaviour when it is reloaded
  15-20 times. It sometimes works fine, while at other times it throws the
  following error.

  ImportError at /

  cannot import name keystone

  Request Method:   GET
  Request URL:  http://localhost:1/horizon/
  Django Version:   1.4.1
  Exception Type:   ImportError
  Exception Value:

  cannot import name keystone

  Exception Location: /usr/lib/python2.7/dist-packages/horizon/dashboards/settings/project/views.py in <module>, line 17
  Python Executable:/usr/bin/python
  Python Version:   2.7.3
  Python Path:

  ['/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../..',
   '/usr/lib/python2.7',
   '/usr/lib/python2.7/plat-linux2',
   '/usr/lib/python2.7/lib-tk',
   '/usr/lib/python2.7/lib-old',
   '/usr/lib/python2.7/lib-dynload',
   '/usr/local/lib/python2.7/dist-packages',
   '/usr/lib/python2.7/dist-packages',
   '/usr/share/openstack-dashboard/',
   '/usr/share/openstack-dashboard/openstack_dashboard']

  Dashboard is installed using ubuntu packages.
  openstack-dashboard  2012.2.1-0ubuntu1~cloud0
  openstack-dashboard-ubuntu-theme  2012.2.1-0ubuntu1~cloud0
  python-django-horizon2012.2.1-0ubuntu1~cloud0
  python-openstack-auth1.0.1-0ubuntu6~cloud0

  Server: apache2

  root@be-openstack01:~# tail -f  /var/log/apache2/error.log
  [Thu Feb 07 13:44:06 2013] [error] urlpatterns = self._get_default_urlpatterns()
  [Thu Feb 07 13:44:06 2013] [error]   File "/usr/lib/python2.7/dist-packages/horizon/base.py", line 96, in _get_default_urlpatterns
  [Thu Feb 07 13:44:06 2013] [error] urls_mod = import_module('.urls', package_string)
  [Thu Feb 07 13:44:06 2013] [error]   File "/usr/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module
  [Thu Feb 07 13:44:06 2013] [error] __import__(name)
  [Thu Feb 07 13:44:06 2013] [error]   File "/usr/lib/python2.7/dist-packages/horizon/dashboards/settings/project/urls.py", line 19, in <module>
  [Thu Feb 07 13:44:06 2013] [error] from .views import OpenRCView
  [Thu Feb 07 13:44:06 2013] [error]   File "/usr/lib/python2.7/dist-packages/horizon/dashboards/settings/project/views.py", line 17, in <module>
  [Thu Feb 07 13:44:06 2013] [error] from horizon.api import keystone
  [Thu Feb 07 13:44:06 2013] [error] ImportError: cannot import name keystone

  
  Release: folsom

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1116467/+subscriptions



[Yahoo-eng-team] [Bug 1160939] Re: Horizon needs to consider handling volume attachment where volume could be attached to a physical server not vm instance in nova.

2017-12-13 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1160939

Title:
  Horizon needs to consider handling volume attachment where volume
  could be attached to a physical server not vm instance in nova.

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  Horizon volume management currently assumes that a Cinder volume is attached
  to a VM instance, and it retrieves the attachment information from the Nova
  DB (based on the instance ID). Cinder is a generic volume service, and in the
  future the instance ID could refer to a bare-metal server not necessarily
  managed by Nova.
  Currently, Horizon throws the exception "Error: Unable to retrieve attachment
  information." Horizon needs to handle this path better. For example, if it is
  unable to retrieve the instance name, it should show the instance ID or allow
  other sources of that information to be defined.
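
  An illustrative fallback sketch only (helper names are assumed, not Horizon's
  actual code): when the Nova lookup for the attached server fails, fall back to
  displaying the raw attachment ID instead of erroring out.

      def attachment_display_name(request, attachment):
          # Try to resolve a friendly name via Nova; if the attachment points at
          # something Nova does not know about (e.g. a bare-metal server), fall
          # back to the raw server_id instead of raising an error.
          try:
              server = nova_server_get(request, attachment['server_id'])  # assumed helper
              return server.name
          except Exception:
              return attachment['server_id']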

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1160939/+subscriptions



[Yahoo-eng-team] [Bug 1117459] Re: Distinct behavior between client and server side filters

2017-12-13 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1117459

Title:
  Distinct behavior between client and server side filters

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  The server-side implementation of the filter on 'Users' and
  'Projects' defines only some columns to be filtered, while the client-side
  filter implemented in horizon/static/horizon/js/horizon.tables.js
  uses all available columns.
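
  A small illustrative sketch of the mismatch (the column whitelist is assumed
  for the example): the server-side path only matches a fixed set of columns,
  while the client-side path matches any column, so the same filter string can
  return different rows depending on which path handles it.

      SERVER_SIDE_FILTER_COLUMNS = ('name', 'email')  # assumed whitelist

      def server_side_match(row, query):
          return any(query in str(row.get(col, '')) for col in SERVER_SIDE_FILTER_COLUMNS)

      def client_side_match(row, query):
          # horizon.tables.js effectively looks at every rendered column
          return any(query in str(value) for value in row.values())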

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1117459/+subscriptions



[Yahoo-eng-team] [Bug 1202949] Re: Get password action not available for instances.

2017-12-13 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1202949

Title:
  Get password action not available for instances.

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  The admin user should have the ability to get a server's password when a
  regular user has forgotten it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1202949/+subscriptions



[Yahoo-eng-team] [Bug 1335618] Re: Horizon: Disable buttons when user doesn't fill mandatory fields

2017-12-13 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1335618

Title:
  Horizon: Disable buttons when user doesn't fill mandatory fields

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  Description of problem:
  ===
  Many dialogs have mandatory fields.
  Buttons in these dialogs should be disabled until the user fills in the
  mandatory fields.

  Version-Release number of selected component (if applicable):
  
  python-django-horizon-2014.1-7.el7ost.noarch
  openstack-dashboard-2014.1-7.el7ost.noarch

  How reproducible:
  =
  100%

  Steps to Reproduce:
  ===
  1. Login to Horizon
  2. Click 'Project' --> 'Compute' --> 'Images'
  3. Click 'Create Image'

  Actual results:
  ===
  Mandatory fields are empty but the 'Create Image' button is enabled

  Expected results:
  =
  The button should be disabled until the user fills in the mandatory fields.

  
  Additional info:
  
  Two screenshots are enclosed

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1335618/+subscriptions



[Yahoo-eng-team] [Bug 1737856] Re: Listing instances with a marker doesn't nix the marker if it's found in build_requests

2017-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/527564
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1706e3989157f912bce99beb99c75216a064eb2d
Submitter: Zuul
Branch:master

commit 1706e3989157f912bce99beb99c75216a064eb2d
Author: Matt Riedemann 
Date:   Tue Dec 12 21:27:28 2017 -0500

Raise MarkerNotFound if BuildRequestList.get_by_filters doesn't find marker

For some reason, probably because build requests are meant to be short lived
and we don't get a lot of bugs about paging misbehavior, when paging instances
with a marker, we didn't raise MarkerNotFound if we didn't find the marker in
the list of build requests. Doing so would match what we do when paging over
cells and listing instances using a marker. Once we find the marker, be that
in build_requests, or one of the cells, we need to set the marker to None to
stop looking for it elsewhere if we have more space to fill our limit.

For example, see change I8a957bebfcecd6ac712103c346e028d80f1ecd7c.

This patch fixes the issue by raising MarkerNotFound from BuildRequestList
get_by_filters if there is a marker and we didn't find a build request for
it. The compute API get_all() method handles that as normal and continues
looking for the marker in one of the cells.

Change-Id: I1aa3ca6cc70cef65d24dec1e7db9491c9b73f7ab
Closes-Bug: #1737856


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1737856

Title:
  Listing instances with a marker doesn't nix the marker if it's found
  in build_requests

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Confirmed
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed

Bug description:
  When listing instances, we start with the build requests and then hit
  the cells.

  If we're given a marker, we use it to trim the build_requests:

  
https://github.com/openstack/nova/blob/master/nova/objects/build_request.py#L440-L457

  But normally if you're looking for a marker and don't find it, that
  get_by_filters code should raise MarkerNotFound to indicate to the
  caller that you asked to filter with a marker which isn't here.

  If we got results back from build_requests with a marker, then the
  compute API get_all() method should null out the marker and continue
  filling the limit in the cells, like what we'd do here:

  https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2426

  And this is how it's handled within a cell database:

  
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L2242-L2257
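
  A minimal sketch (helper shape assumed, not the actual BuildRequestList code)
  of the marker contract described above: raise MarkerNotFound when a marker is
  given but no matching build request exists, so the caller keeps looking for
  it in the cells.

      from nova import exception

      def trim_to_marker(build_requests, marker):
          if marker is None:
              return build_requests
          for i, req in enumerate(build_requests):
              if req.instance_uuid == marker:
                  # Marker found: return everything after it so the caller can
                  # null out the marker before moving on to the cell databases.
                  return build_requests[i + 1:]
          # Marker not found here; let the compute API keep searching the cells.
          raise exception.MarkerNotFound(marker=marker)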

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1737856/+subscriptions



[Yahoo-eng-team] [Bug 1719561] Re: Instance action's updated_at doesn't updated when action created or action event updated.

2017-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/507473
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=1a4ae60e1b48efb3d80cd36a285eda7ef5620f12
Submitter: Zuul
Branch:master

commit 1a4ae60e1b48efb3d80cd36a285eda7ef5620f12
Author: Yikun Jiang 
Date:   Tue Sep 26 18:33:00 2017 +0800

Update Instance action's updated_at when action event updated.

When we perform an operation on an instance, an instance action (such as
'create') is recorded in the 'instance_actions' table, and sub-events (such
as compute__do_build_and_run_instance) are recorded in the
'instance_actions_events' table.

We need to update the instance action's updated_at when instance action
events are created and when the instance action is created or finished.

Change-Id: I75a827b759b59773c08ffc6b1e3e54d6189b5853
Closes-Bug: #1719561


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1719561

Title:
  Instance action's updated_at doesn't updated when action created or
  action event updated.

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  When we perform an operation on an instance, an instance action (such as
  'create') is recorded in the 'instance_actions' table, and sub-events (such
  as compute__do_build_and_run_instance) are recorded in the
  'instance_actions_events' table.

  A timestamp (change-since) filter will be added for filtering the action
  results based on updated_at (the last time the instance action was updated).
  But currently the record's updated time is not refreshed when an action is
  created or when a sub-event of the action is updated.

  So, in this patch, we need to update the instance action's updated_at when
  instance action events are created and when the instance action is created
  or finished.
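
  A hypothetical sketch of that behaviour (helper names assumed): bump the
  parent instance action's updated_at whenever an action event is recorded.

      import datetime

      def record_action_event(context, action, event_name):
          event = create_action_event(context, action, event_name)  # assumed helper
          # Touch the parent action so a change-since filter on updated_at
          # also reflects activity on its sub-events.
          action.updated_at = datetime.datetime.utcnow()
          action.save()
          return event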

  Steps to reproduce
  ==
  1. Create an instance
  nova boot --image 81e58b1a-4732-4255-b4f8-c844430485d2 --flavor 1 yikun

  2. Look up record in instance_actions and instance_actions_events
  mysql> select * from instance_actions\G
  *** 1. row ***
     created_at: 2017-09-25 07:16:07
     updated_at: NULL--->  here
     deleted_at: NULL
     id: 48
     action: create
  instance_uuid: fdd52ec6-100b-4a25-a5db-db7c5ad17fa8
     request_id: req-511dee3e-8951-4360-b72b-3a7ec091e7c8
    user_id: 1687f2a66222421790475760711e40e5
     project_id: 781b620d86534d549dd64902674c0f69
     start_time: 2017-09-25 07:16:05
    finish_time: NULL
    message: NULL
    deleted: 0

  mysql> select * from instance_actions_events\G
  *** 1. row ***
     created_at: 2017-09-25 07:16:07
     updated_at: 2017-09-25 07:16:22
     deleted_at: NULL
     id: 1
     action: create
  instance_uuid: fdd52ec6-100b-4a25-a5db-db7c5ad17fa8
     request_id: req-511dee3e-8951-4360-b72b-3a7ec091e7c8
    user_id: 1687f2a66222421790475760711e40e5
     project_id: 781b620d86534d549dd64902674c0f69
     start_time: 2017-09-25 07:16:05
    finish_time: NULL
    message: NULL
    deleted: 0

  Expected result
  ===
  Update the instance action's updated_at when instance action events
  are started or finished, or when the instance action is created.

  Actual result
  =
  The instance action's updated_at is not updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1719561/+subscriptions



[Yahoo-eng-team] [Bug 1738094] [NEW] TEXT is not large enough to store RequestSpec

2017-12-13 Thread Dan Smith
Public bug reported:

This error occurs during Newton's online_data_migration phase:

error: (pymysql.err.DataError) (1406, u"Data too long for column 'spec'
at row 1") [SQL: u'INSERT INTO request_specs

This comes from RequestSpec.instance_group.members being extremely large.
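
A rough illustration (assuming the spec is serialized to JSON before the
INSERT): a server group with a few thousand member UUIDs easily exceeds the
65535-byte limit of a MySQL TEXT column.

    import json
    import uuid

    members = [str(uuid.uuid4()) for _ in range(2000)]
    payload = json.dumps({'instance_group': {'members': members}})
    print(len(payload))  # roughly 78 KB, well over the 65535-byte TEXT limit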

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1738094

Title:
  TEXT is not large enough to store RequestSpec

Status in OpenStack Compute (nova):
  New

Bug description:
  This error occurs during Newton's online_data_migration phase:

  error: (pymysql.err.DataError) (1406, u"Data too long for column
  'spec' at row 1") [SQL: u'INSERT INTO request_specs

  This comes from RequestSpec.instance_group.members being extremely large.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1738094/+subscriptions



[Yahoo-eng-team] [Bug 1738083] [NEW] DBDeadlock when syncing traits in Placement during list_allocation_candidates

2017-12-13 Thread Matt Riedemann
Public bug reported:

This killed a scheduling request, so we ended up with a NoValidHost:

http://logs.openstack.org/64/527564/1/gate/legacy-tempest-dsvm-
py35/7db2d64/logs/screen-placement-api.txt.gz#_Dec_13_17_07_40_968321

It looks like it blows up here:

Dec 13 17:07:40.973678 ubuntu-xenial-citycloud-sto2-0001423712 
devstack@placement-api.service[14690]: ERROR 
nova.api.openstack.placement.handler   File 
"/opt/stack/new/nova/nova/api/openstack/placement/handlers/allocation_candidate.py",
 line 217, in list_allocation_candidates
Dec 13 17:07:40.973796 ubuntu-xenial-citycloud-sto2-0001423712 
devstack@placement-api.service[14690]: ERROR 
nova.api.openstack.placement.handler cands = 
rp_obj.AllocationCandidates.get_by_requests(context, requests)
Dec 13 17:07:40.973893 ubuntu-xenial-citycloud-sto2-0001423712 
devstack@placement-api.service[14690]: ERROR 
nova.api.openstack.placement.handler   File 
"/opt/stack/new/nova/nova/objects/resource_provider.py", line 3182, in 
get_by_requests
Dec 13 17:07:40.973969 ubuntu-xenial-citycloud-sto2-0001423712 
devstack@placement-api.service[14690]: ERROR 
nova.api.openstack.placement.handler _ensure_trait_sync(context)
Dec 13 17:07:40.974045 ubuntu-xenial-citycloud-sto2-0001423712 
devstack@placement-api.service[14690]: ERROR 
nova.api.openstack.placement.handler   File 
"/opt/stack/new/nova/nova/objects/resource_provider.py", line 135, in 
_ensure_trait_sync
Dec 13 17:07:40.974140 ubuntu-xenial-citycloud-sto2-0001423712 
devstack@placement-api.service[14690]: ERROR 
nova.api.openstack.placement.handler _trait_sync(ctx)
Dec 13 17:07:40.974218 ubuntu-xenial-citycloud-sto2-0001423712 
devstack@placement-api.service[14690]: ERROR 
nova.api.openstack.placement.handler   File 
"/usr/local/lib/python3.5/dist-packages/oslo_db/sqlalchemy/enginefacade.py", 
line 984, in wrapper
Dec 13 17:07:40.974294 ubuntu-xenial-citycloud-sto2-0001423712 
devstack@placement-api.service[14690]: ERROR 
nova.api.openstack.placement.handler return fn(*args, **kwargs)
Dec 13 17:07:40.974366 ubuntu-xenial-citycloud-sto2-0001423712 
devstack@placement-api.service[14690]: ERROR 
nova.api.openstack.placement.handler   File 
"/opt/stack/new/nova/nova/objects/resource_provider.py", line 108, in 
_trait_sync

Due to this deadlock:

oslo_db.exception.DBDeadlock: (pymysql.err.InternalError) (1213,
'Deadlock found when trying to get lock; try restarting transaction')
[SQL: 'INSERT INTO traits (created_at, name) VALUES (%(created_at)s,
%(name)s)'] [parameters: ({'created_at': datetime.datetime(2017, 12, 13,
17, 7, 40, 954357), 'name': 'HW_GPU_API_CUDA_V2_1'}, {'created_at':
datetime.datetime(2017, 12, 13, 17, 7, 40, 954363), 'name':
'HW_CPU_X86_TSX'}, {'created_at': datetime.datetime(2017, 12, 13, 17, 7,
40, 954365), 'name': 'HW_CPU_X86_AVX512ER'}, {'created_at':
datetime.datetime(2017, 12, 13, 17, 7, 40, 954367), 'name':
'HW_NIC_OFFLOAD_GRO'}, {'created_at': datetime.datetime(2017, 12, 13,
17, 7, 40, 954369), 'name': 'HW_GPU_API_DIRECT3D_V11_2'}, {'created_at':
datetime.datetime(2017, 12, 13, 17, 7, 40, 954371), 'name':
'HW_GPU_API_OPENGL_V4_4'}, {'created_at': datetime.datetime(2017, 12,
13, 17, 7, 40, 954373), 'name': 'HW_GPU_API_CUDA_V1_2'}, {'created_at':
datetime.datetime(2017, 12, 13, 17, 7, 40, 954375), 'name':
'HW_CPU_X86_AVX512VL'}  ... displaying 10 of 163 total bound parameter
sets ...  {'created_at': datetime.datetime(2017, 12, 13, 17, 7, 40,
954663), 'name': 'HW_NIC_OFFLOAD_FDF'}, {'created_at':
datetime.datetime(2017, 12, 13, 17, 7, 40, 954665), 'name':
'HW_GPU_API_OPENGL_V4_0'})]
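
For context only (this is a common mitigation pattern, not the fix for this
bug): oslo.db provides a retry decorator that can re-run a transaction when the
database reports a deadlock, sketched below with an assumed helper.

    from oslo_db import api as oslo_db_api

    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def sync_traits(context, trait_names):
        # Insert any missing trait rows; if two placement API workers race and
        # MySQL reports a deadlock, the decorator retries the whole transaction.
        _insert_missing_traits(context, trait_names)  # assumed helper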

** Affects: nova
 Importance: High
 Status: Triaged


** Tags: db placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1738083

Title:
  DBDeadlock when syncing traits in Placement during
  list_allocation_candidates

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  This killed a scheduling request, so we ended up with a NoValidHost:

  http://logs.openstack.org/64/527564/1/gate/legacy-tempest-dsvm-
  py35/7db2d64/logs/screen-placement-api.txt.gz#_Dec_13_17_07_40_968321

  It looks like it blows up here:

  Dec 13 17:07:40.973678 ubuntu-xenial-citycloud-sto2-0001423712 
devstack@placement-api.service[14690]: ERROR 
nova.api.openstack.placement.handler   File 
"/opt/stack/new/nova/nova/api/openstack/placement/handlers/allocation_candidate.py",
 line 217, in list_allocation_candidates
  Dec 13 17:07:40.973796 ubuntu-xenial-citycloud-sto2-0001423712 
devstack@placement-api.service[14690]: ERROR 
nova.api.openstack.placement.handler cands = 
rp_obj.AllocationCandidates.get_by_requests(context, requests)
  Dec 13 17:07:40.973893 ubuntu-xenial-citycloud-sto2-0001423712 
devstack@placement-api.service[14690]: ERROR 
nova.api.openstack.placement.handler   File 
"/opt/stack/new/nova/nova/objects/resource_provider.py", line 3182, in 
get_by_re

[Yahoo-eng-team] [Bug 1645630] Re: Material Theme in Newton Appears to be Broken

2017-12-13 Thread Akihiro Motoki
python manage.py compress works well both with newton-eol and
stable/ocata horizon. I specified upper-constraints.txt from
stable/newton and stable/ocata respectively. I believe this bug is no
longer valid.

** Changed in: horizon
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1645630

Title:
  Material Theme in Newton Appears to be Broken

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Pulled down project from git (stable/newton).

  Used OpenStack global requirements and installed in a virtualenv under
  Ubuntu 14.04 (python 2.7).

  Problem manifested during compress operation, when setting up the web
  application for use.

  Code excerpt attached.

  To workaround, I explicitly disabled the Material theme.

  
https://gist.githubusercontent.com/jjahns/025e51e0b82009dd17a30651c2256262/raw/d02d96dcba8c2f68a0b68a3b175e6ed1c77190fc/gistfile1.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1645630/+subscriptions



[Yahoo-eng-team] [Bug 1664931] Re: [OSSA-2017-005] nova rebuild ignores all image properties and scheduler filters (CVE-2017-16239)

2017-12-13 Thread Corey Bryant
** Changed in: nova (Ubuntu)
   Status: Triaged => Fix Committed

** Changed in: nova (Ubuntu)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1664931

Title:
  [OSSA-2017-005] nova rebuild ignores all image properties and
  scheduler filters (CVE-2017-16239)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Fix Committed
Status in OpenStack Compute (nova) ocata series:
  Fix Committed
Status in OpenStack Compute (nova) pike series:
  Fix Committed
Status in OpenStack Security Advisory:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  Big picture: If an image has a restriction on the aggregates or hosts it can
  run on, a tenant may use the nova rebuild command to circumvent those
  restrictions. The main issue is with ImagePropertiesFilter, but it may also
  cause issues with flavor/image combinations (for example, it allows running a
  license-restricted OS (Windows) on a host which has no such license, or
  rebuilding an instance with a cheap flavor using an image which is restricted
  to high-priced flavors).

  I don't know if this is a security bug or not; if you find it to be a
  non-security issue, please remove the security flag.

  Steps to reproduce:

  1. Set up nova with ImagePropertiesFilter or IsolatedHostsFilter active.
  They should allow 'image1' to run only on 'host1', never on 'host2'.
  2. Boot instance with some other (non-restricted) image on 'host2'.
  3. Use nova rebuild INSTANCE image1

  Expected result:

  nova rejects rebuild because given image ('image1') may not run on
  'host2'.

  Actual result:

  nova happily rebuild instance with image1 on host2, violating
  restrictions.

  Checked affected version: mitaka.

  I believe that, due to the way the 'rebuild' command works, newton and
  master are affected too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1664931/+subscriptions



[Yahoo-eng-team] [Bug 1543756] Re: [RFE] RBAC: Allow user to create port from specific subnet on shared network

2017-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/432850
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=8236e83deced9af84ae0e5128c76acfa753093cc
Submitter: Zuul
Branch:master

commit 8236e83deced9af84ae0e5128c76acfa753093cc
Author: Reedip 
Date:   Mon Feb 13 00:38:54 2017 -0500

Allow port create/update by shared nw owners

Currently if a new port is created by a tenant with whom
the network is shared (tenant is not the owner but has
network shared via RBAC) , the port is allocated on the default
subnet. This patch allows the tenant to create/update a port on
any subnet which is actually a part of a shared network, owned by
another tenant.
Tempest test in [1]

[1]: https://review.openstack.org/521413
Change-Id: I1046f6b13e68b1e274cc8f62f5b30aa5f8d71cdc
Closes-Bug: #1543756


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543756

Title:
  [RFE] RBAC: Allow user to create port from specific subnet on shared
  network

Status in neutron:
  Fix Released

Bug description:
  The network demo-net, owned by user demo, is shared with tenant
  demo-2.  The sharing is created by demo using the command

  neutron rbac-create --type network --action access_as_shared --target-
  tenant  demo-net

  
  A user on the demo-2 tenant can see the network demo-net:

  stack@Ubuntu-38:~/DEVSTACK/demo$ neutron net-list
  +--------------------------------------+----------+--------------------------------------------------+
  | id                                   | name     | subnets                                          |
  +--------------------------------------+----------+--------------------------------------------------+
  | 85bb7612-e5fa-440c-bacf-86c5929298f3 | demo-net | e66487b6-430b-4fb1-8a87-ed28dd378c43 10.1.2.0/24 |
  |                                      |          | ff01f7ca-d838-42dc-8d86-1b2830bc4824 10.1.3.0/24 |
  | 5beb4080-4cf0-4921-9bbf-a7f65df6367f | public   | 57485a80-815c-45ef-a0d1-ce11939d7fab             |
  |                                      |          | 38d1ddad-8084-4d32-b142-240e16fcd5df             |
  +--------------------------------------+----------+--------------------------------------------------+


  
  The owner of network demo-net is able to create a port using the command 'neutron port-create demo-net --fixed-ip ...':
  stack@Ubuntu-38:~/DEVSTACK/devstack$ neutron port-create demo-net --fixed-ip subnet_id=ff01f7ca-d838-42dc-8d86-1b2830bc4824
  Created a new port:
  +-----------------------+----------------------------------------------------------------------------------+
  | Field                 | Value                                                                            |
  +-----------------------+----------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                             |
  | allowed_address_pairs |                                                                                  |
  | binding:vnic_type     | normal                                                                           |
  | device_id             |                                                                                  |
  | device_owner          |                                                                                  |
  | dns_name              |                                                                                  |
  | fixed_ips             | {"subnet_id": "ff01f7ca-d838-42dc-8d86-1b2830bc4824", "ip_address": "10.1.3.6"} |
  | id                    | 37402f22-fcd5-4b01-8b01-c6734573d7a8                                             |
  | mac_address           | fa:16:3e:44:71:ad                                                                |
  | name                  |                                                                                  |
  | network_id            | 85bb7612-e5fa-440c-bacf-86c5929298f3                                             |
  | security_groups       | 7db11aa0-3d0d-40d1-ae25-e4c02b8886ce                                             |
  | status                | DOWN                                                                             |
  | tenant_id             | 54913ee1ca89458ba792d685c799484d                                                 |
  +-----------------------+----------------------------------------------------------------------------------+


  The user demo-2 of tenant demo-2 is able to create a port using the
  network demo-net:

  stack@Ubuntu-38:~/DEVSTACK/demo$ neutron port-create demo-net
  Created a new port:
  
+---+-+
  | Field

[Yahoo-eng-team] [Bug 1732890] Re: floatingip-create:Ignore floating_ip_address when specifying both floating_ip_address and subnet

2017-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/521707
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=088e317cd2dd8488feb29a4fa6600227d1810479
Submitter: Zuul
Branch:master

commit 088e317cd2dd8488feb29a4fa6600227d1810479
Author: Dongcan Ye 
Date:   Tue Nov 21 11:46:56 2017 +0800

Honor both floating_ip_address and subnet_id when creating FIP

In the current code, if user specifies floating-ip-address
and subnet, we only process the subnet when creating
the fip port.

This patch adds floating_ip_address and subnet_id to
fip port's fixed_ips, if floating_ip_address is not in the subnet,
InvalidIpForSubnet exception will be raised.

This patch also fixes a default value error in tests.

Change-Id: I436353690839281ca7e13eaf792249306b71dd4b
Closes-Bug: #1732890


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1732890

Title:
  floatingip-create:Ignore floating_ip_address when specifying both
  floating_ip_address and subnet

Status in neutron:
  Fix Released

Bug description:
  When I created floating ip with "floating_ip_address" and "subnet", it 
ignored "floating_ip_address".
  $ neutron floatingip-create --floating-ip-address 172.24.4.25 --subnet d5ece368-35fb-4537-be84-eda656250974
  Created a new floatingip:
  +-+--+
  | Field   | Value|
  +-+--+
  | created_at  | 2017-11-17T09:42:57Z |
  | description |  |
  | fixed_ip_address|  |
  | floating_ip_address | 172.24.4.10  |
  | floating_network_id | fa18e1d7-1f33-48c0-a77f-f192f3c1c6df |
  | id  | 4d6129a4-9076-4e79-b3f0-b05ce68deb05 |
  | port_id |  |
  | project_id  | f0f9361fbf8e495b97eeadae6a81e14d |
  | revision_number | 1|
  | router_id   |  |
  | status  | DOWN |
  | tenant_id   | f0f9361fbf8e495b97eeadae6a81e14d |
  | updated_at  | 2017-11-17T09:42:57Z |
  +-+--+

  This is my REQ:
  DEBUG: keystoneauth.session REQ: curl -g -i -X POST 
http://10.10.10.7:9696/v2.0/floatingips.json -H "User-Agent: 
python-neutronclient" -H "Content-Type: application/json" -H "Accept: 
application/json" -H "X-Auth-Token: 
{SHA1}0996a50cdaac248681cedb7000dbe71c7bd1a3e0" -d '{"floatingip": 
{"floating_network_id": "fa18e1d7-1f33-48c0-a77f-f192f3c1c6df", "subnet_id": 
"d5ece368-35fb-4537-be84-eda656250974", "floating_ip_address": "172.24.4.25"}}'
  And this is my RESP:
  RESP BODY: {"floatingip": {"router_id": null, "status": "DOWN", 
"description": "", "tenant_id": "f0f9361fbf8e495b97eeadae6a81e14d", 
"created_at": "2017-11-17T09:42:57Z", "updated_at": "2017-11-17T09:42:57Z", 
"floating_network_id": "fa18e1d7-1f33-48c0-a77f-f192f3c1c6df", 
"fixed_ip_address": null, "floating_ip_address": "172.24.4.10", 
"revision_number": 1, "project_id": "f0f9361fbf8e495b97eeadae6a81e14d", 
"port_id": null, "id": "4d6129a4-9076-4e79-b3f0-b05ce68deb05"}}

  I think we should make sure the "floating_ip_address" belongs to the "subnet"
  and create it, or we should report an error message when both parameters are
  set at the same time.
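
  A hypothetical sketch of the behaviour the fix describes: include both values
  in the external port's fixed_ips instead of silently dropping the requested
  address (validating that the address is inside the subnet is left to the IPAM
  layer, which is expected to raise InvalidIpForSubnet).

      def build_fip_port_fixed_ips(subnet_id, floating_ip_address):
          fixed_ip = {}
          if subnet_id:
              fixed_ip['subnet_id'] = subnet_id
          if floating_ip_address:
              # If the address is not part of the subnet, port creation should
              # fail instead of ignoring the requested address.
              fixed_ip['ip_address'] = floating_ip_address
          return [fixed_ip] if fixed_ip else []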

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1732890/+subscriptions



[Yahoo-eng-team] [Bug 1733933] Re: nova-conductor is masking error when rescheduling

2017-12-13 Thread Matt Riedemann
Running nova in a split-MQ "super conductor" mode outside of devstack is
not required, so I wouldn't consider this a critical regression; it's
something that deployers have to opt into based on how they set up their
nova deployment.

** Changed in: nova
   Status: Confirmed => Won't Fix

** Changed in: nova
   Status: Won't Fix => Triaged

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1733933

Title:
  nova-conductor is masking error when rescheduling

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Sometimes when build_instance fails on n-cpu, the error that n-cond
  receives is mangled like this:

  Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: ERROR 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
  [instance: 5ee9d527-0043-474e-bfb3-e6621426662e] Error from last host: 
jh-devstack-03 (node jh-devstack03): [u'Traceback (most recent call last):\n', 
u'  File "/opt/stack/nova/nova/compute/manager.py", line 1847, in
   _do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 2086, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', 
  u"RescheduledException: Build of instance 
5ee9d527-0043-474e-bfb3-e6621426662e was re-scheduled: operation failed: domain 
'instance-0028' already exists with uuid 
  93974d36e3a7-4139bbd8-2d5b51195a5f\n"]
  Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: WARNING 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
  Failed to compute_task_build_instances: No sql_connection parameter is 
established: CantStartEngineError: No sql_connection parameter is established
  Nov 22 17:39:04 jh-devstack-03 nova-conductor[26556]: WARNING 
nova.scheduler.utils [None req-fd8acb29-8a2c-4603-8786-54f2580d0ea9 
tempest-FloatingIpSameNetwork-1597192363 
tempest-FloatingIpSameNetwork-1597192363] 
  [instance: 5ee9d527-0043-474e-bfb3-e6621426662e] Setting instance to ERROR 
state.: CantStartEngineError: No sql_connection parameter is established

  This seems to occur quite often in the gate, too.
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Setting%20instance%20to%20ERROR%20state.%3A%20CantStartEngineError%5C%22

  The result is that the instance information shows "No sql_connection
  parameter is established" instead of the original error, making
  debugging the root cause quite difficult.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1733933/+subscriptions



[Yahoo-eng-team] [Bug 1738023] [NEW] neutron-lib: bogus error when OVO primary key missing

2017-12-13 Thread Thomas Morin
Public bug reported:

I see the following in logs:

Dec 13 11:39:27.531721 ubuntu-xenial-rax-iad-0001416544 neutron-
server[5115]: ERROR
networking_bgpvpn.neutron.services.service_drivers.bagpipe.bagpipe
NeutronPrimaryKeyMissing: For class str missing primary keys:
set(['network_id'])


The 'For class str' part is bogus, and seems to be due to the fact that at [1]
we have

raise o_exc.NeutronPrimaryKeyMissing(object_class=cls.__name__,
 missing_keys=missing_keys)

instead of

raise o_exc.NeutronPrimaryKeyMissing(object_class=cls,
 missing_keys=missing_keys)

[1]
https://github.com/openstack/neutron/blob/master/neutron/objects/base.py#L514

** Affects: neutron
 Importance: Undecided
 Assignee: Thomas Morin (tmmorin-orange)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1738023

Title:
  neutron-lib: bogus error when OVO primary key missing

Status in neutron:
  In Progress

Bug description:
  I see the following in logs:

  Dec 13 11:39:27.531721 ubuntu-xenial-rax-iad-0001416544 neutron-
  server[5115]: ERROR
  networking_bgpvpn.neutron.services.service_drivers.bagpipe.bagpipe
  NeutronPrimaryKeyMissing: For class str missing primary keys:
  set(['network_id'])

  
  The 'For class str' part is bogus, and seems to be due to the fact that at
  [1] we have

  raise o_exc.NeutronPrimaryKeyMissing(object_class=cls.__name__,
   missing_keys=missing_keys)

  instead of

  raise o_exc.NeutronPrimaryKeyMissing(object_class=cls,
   missing_keys=missing_keys)

  [1]
  https://github.com/openstack/neutron/blob/master/neutron/objects/base.py#L514

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1738023/+subscriptions



[Yahoo-eng-team] [Bug 1736976] Re: test_live_migration_actions functional test randomly fails with "AssertionError: The migration table left empty."

2017-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/527440
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=937db90146f04783183644bfeff78b1f6bf854b8
Submitter: Zuul
Branch:master

commit 937db90146f04783183644bfeff78b1f6bf854b8
Author: Balazs Gibizer 
Date:   Tue Dec 12 16:33:55 2017 +0100

Stabilize test_live_migration_abort func test

The test_live_migration_abort test step in the
test_live_migration_actions notification sample test was unstable.
The test starts a live migration and then deletes the migration object
via the REST API to abort it. The test randomly failed to find the
migration object on the REST API. Based on a comparison of the logs
of the successful and unsuccessful runs it was visible that in the
unsuccessful case the test gave up waiting for the migration object too
early. In a successful run it took 1.5 seconds after the migration
API call for the migration object to appear on the API, while in an
unsuccessful case the test gave up after 1 second.

This early give-up was caused by the fact that the loop trying to
get a migration does not apply any delay between such trials and
therefore the 20 attempts run out quickly.

This patch introduces a short sleep between trials to stabilize the
test.

Change-Id: I6be3b236d8eadcde5714c08069708dff303dfd4d
Closes-Bug: #1736976
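
A minimal sketch of the retry-with-delay pattern the commit describes (helper
names assumed, not the actual functional-test code):

    import time

    def wait_for_migration(api, server_id, attempts=20, delay=0.5):
        for _ in range(attempts):
            migrations = api.get_migrations()  # assumed test helper
            if any(m['instance_uuid'] == server_id for m in migrations):
                return
            time.sleep(delay)  # give the migration record time to appear
        raise AssertionError('The migration table left empty.')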


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1736976

Title:
  test_live_migration_actions functional test randomly fails with
  "AssertionError: The migration table left empty."

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Seen here:

  http://logs.openstack.org/85/330285/175/check/openstack-tox-
  functional-py35/9a23bfd/testr_results.html.gz

  ft1.1: 
nova.tests.functional.notification_sample_tests.test_instance.TestInstanceNotificationSampleWithMultipleComputeOldAttachFlow.test_live_migration_actions_StringException:
 pythonlogging:'': {{{
  2017-12-07 10:10:57,771 WARNING [oslo_config.cfg] Config option 
key_manager.api_class  is deprecated. Use option key_manager.backend instead.
  2017-12-07 10:10:59,554 INFO [nova.service] Starting conductor node (version 
17.0.0)
  2017-12-07 10:10:59,571 INFO [nova.service] Starting scheduler node (version 
17.0.0)
  2017-12-07 10:10:59,595 INFO [nova.network.driver] Loading network driver 
'nova.network.linux_net'
  2017-12-07 10:10:59,596 INFO [nova.service] Starting network node (version 
17.0.0)
  2017-12-07 10:10:59,624 INFO [nova.virt.driver] Loading compute driver 
'fake.FakeLiveMigrateDriver'
  2017-12-07 10:10:59,624 INFO [nova.service] Starting compute node (version 
17.0.0)
  2017-12-07 10:10:59,647 WARNING [nova.compute.manager] No compute node record 
found for host compute. If this is the first time this service is starting on 
this host, then you can ignore this warning.
  2017-12-07 10:10:59,647 WARNING [nova.compute.monitors] Excluding 
nova.compute.monitors.cpu monitor virt_driver. Not in the list of enabled 
monitors (CONF.compute_monitors).
  2017-12-07 10:10:59,651 WARNING [nova.compute.resource_tracker] No compute 
node record for compute:fake-mini
  2017-12-07 10:10:59,654 INFO [nova.compute.resource_tracker] Compute node 
record created for compute:fake-mini with uuid: 
6dfc5763-1a0b-47e1-941f-af8b88f2cf6e
  2017-12-07 10:10:59,672 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/6dfc5763-1a0b-47e1-941f-af8b88f2cf6e" status: 404 
len: 227 microversion: 1.0
  2017-12-07 10:10:59,677 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "POST /placement/resource_providers" status: 201 len: 0 microversion: 
1.0
  2017-12-07 10:10:59,678 INFO [nova.scheduler.client.report] 
[req-583c10f0-5400-48e0-9e17-9ca12bbd4c98] Created resource provider record via 
placement API for resource provider with UUID 
6dfc5763-1a0b-47e1-941f-af8b88f2cf6e and name fake-mini.
  2017-12-07 10:10:59,684 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/6dfc5763-1a0b-47e1-941f-af8b88f2cf6e/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-12-07 10:10:59,692 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/6dfc5763-1a0b-47e1-941f-af8b88f2cf6e/inventories" 
status: 200 len: 54 microversion: 1.0
  2017-12-07 10:10:59,702 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "PUT 
/placement/resource_providers/6dfc5763-1a0b-47e1-941f-af8b88f2cf6e/inventories" 
status: 200 len: 413 microversion: 1.0
  2017-12-07 10:10:59,725 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/6dfc5763-1a0b-47e1-941f-af8b88f2cf6e/allocations" 
status: 200 len: 54 microversion: 1.0
  2017-12-07 10:

[Yahoo-eng-team] [Bug 1714248] Re: Compute node HA for ironic doesn't work due to the name duplication of Resource Provider

2017-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/508555
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e3c5e22d1fde7ca916a8cc364f335fba8a3a798f
Submitter: Zuul
Branch:master

commit e3c5e22d1fde7ca916a8cc364f335fba8a3a798f
Author: John Garbutt 
Date:   Fri Sep 29 15:48:54 2017 +0100

Re-use existing ComputeNode on ironic rebalance

When a nova-compute service dies that is one of several ironic based
nova-compute services running, a node rebalance occurs to ensure there
is still an active nova-compute service dealing with requests for the
given instance that is running.

Today, when this occurs, we create a new ComputeNode entry. This change
alters that logic to detect the case of the ironic node rebalance and in
that case we re-use the existing ComputeNode entry, simply updating the
host field to match the new host it has been rebalanced onto.

Previously we hit problems with placement when we get a new
ComputeNode.uuid for the same ironic_node.uuid. This reusing of the
existing entry keeps the ComputeNode.uuid the same when the rebalance of
the ComputeNode occurs.

Without keeping the same ComputeNode.uuid placement errors out with a 409
because we attempt to create a ResourceProvider that has the same name
as an existing ResourceProvdier. Had that worked, we would have noticed
the race that occurs after we create the ResourceProvider but before we
add back the existing allocations for existing instances. Keeping the
ComputeNode.uuid the same means we simply look up the existing
ResourceProvider in placement, avoiding all this pain and tears.

Closes-Bug: #1714248
Co-Authored-By: Dmitry Tantsur 
Change-Id: I4253cffca3dbf558c875eed7e77711a31e9e3406


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1714248

Title:
  Compute node HA for ironic doesn't work due to the name duplication of
  Resource Provider

Status in Ironic:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress

Bug description:
  Description
  ===
  In an environment with multiple compute nodes using the ironic driver, when
  a compute node goes down, another compute node cannot take over its ironic
  nodes.

  Steps to reproduce
  ==
  1. Start multiple compute nodes with ironic driver.
  2. Register one node to ironic.
  2. Stop a compute node which manages the ironic node.
  3. Create an instance.

  Expected result
  ===
  The instance is created.

  Actual result
  =
  The instance creation is failed.

  Environment
  ===
  1. Exact version of OpenStack you are running.
  openstack-nova-scheduler-15.0.6-2.el7.noarch
  openstack-nova-console-15.0.6-2.el7.noarch
  python2-novaclient-7.1.0-1.el7.noarch
  openstack-nova-common-15.0.6-2.el7.noarch
  openstack-nova-serialproxy-15.0.6-2.el7.noarch
  openstack-nova-placement-api-15.0.6-2.el7.noarch
  python-nova-15.0.6-2.el7.noarch
  openstack-nova-novncproxy-15.0.6-2.el7.noarch
  openstack-nova-api-15.0.6-2.el7.noarch
  openstack-nova-conductor-15.0.6-2.el7.noarch

  2. Which hypervisor did you use?
  ironic

  Details
  ===
  When a nova-compute goes down, another nova-compute will take over ironic 
nodes managed by the failed nova-compute by re-balancing a hash-ring. Then the 
active nova-compute tries creating a
  new resource provider with a new ComputeNode object UUID and the hypervisor 
name (ironic node UUID)[1][2][3]. This creation fails with a conflict(409) 
since there is a resource provider with the same name created by the failed 
nova-compute.

  When a new instance is requested, the scheduler gets only an old
  resource provider for the ironic node[4]. Then, the ironic node is not
  selected:

  WARNING nova.scheduler.filters.compute_filter [req-
  a37d68b5-7ab1-4254-8698-502304607a90 7b55e61a07304f9cab1544260dcd3e41
  e21242f450d948d7af2650ac9365ee36 - - -] (compute02, 8904aeeb-a35b-4ba3
  -848a-73269fdde4d3) ram: 4096MB disk: 849920MB io_ops: 0 instances: 0
  has not been heard from in a while

  [1] 
https://github.com/openstack/nova/blob/stable/ocata/nova/compute/resource_tracker.py#L464
  [2] 
https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/client/report.py#L630
  [3] 
https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/client/report.py#L410
  [4] 
https://github.com/openstack/nova/blob/stable/ocata/nova/scheduler/filter_scheduler.py#L183
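
  A sketch only, with assumed helper names, of the rebalance handling later
  implemented: re-use the ComputeNode row that already exists for the ironic
  node so its UUID (and therefore the matching placement resource provider
  name) stays stable.

      def ensure_compute_node(context, my_host, ironic_node_uuid):
          node = find_compute_node_by_hypervisor(context, ironic_node_uuid)  # assumed lookup
          if node is None:
              return create_compute_node(context, my_host, ironic_node_uuid)  # assumed helper
          if node.host != my_host:
              # Take over the node after a hash-ring rebalance, keeping
              # node.uuid (and the resource provider name) unchanged.
              node.host = my_host
              node.save()
          return node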

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1714248/+subscriptions


[Yahoo-eng-team] [Bug 1736755] Re: unit tests error in FIP creation

2017-12-13 Thread Dongcan Ye
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736755

Title:
  unit tests error in FIP creation

Status in neutron:
  Invalid

Bug description:
  While debugging the unit test for [1] with the tox command "tox -e venv --
  python -m testtools.run
  neutron.tests.unit.extensions.test_l3.L3AgentDbIntTestCase.test_l3_agent_routers_query_floatingips",

  I found that the floating IP is created with the following params:
  (Pdb) data
  {'floatingip': {'tenant_id': '46f70361-ba71-4bd0-9769-3573fd227c4b', 
'port_id': u'3dca5c4e-dee5-4a9c-afbe-d77494c42223', 'floating_network_id': 
u'2bdc683e-5b0c-46ad-a85a-9fc138e5778f'}}

  But these params produce an error when used via neutronclient:
  # neutron floatingip-create --port-id 5b129110-d6ba-4e0f-8d56-3fce7d052213 
public

  DEBUG: keystoneauth.session REQ: curl -g -i -X POST 
http://20.30.40.5:9696/v2.0/floatingips -H "User-Agent: python-neutronclient" 
-H "Content-Type: application/json" -H "Accept: application/json" -H 
"X-Auth-Token: {SHA1}d98e0f7fa754f69fc26bd427b244a335f7f8d97a" -d 
'{"floatingip": {"floating_network_id": "30c2a624-7c53-46a2-a733-b196e7d72b40", 
"port_id": "5b129110-d6ba-4e0f-8d56-3fce7d052213"}}'
  DEBUG: keystoneauth.session RESP: [400] Content-Type: application/json 
Content-Length: 147 X-Openstack-Request-Id: 
req-9998bac0-3f87-4db3-98a3-9c98789d275b Date: Wed, 06 Dec 2017 15:02:58 GMT 
Connection: keep-alive 
  RESP BODY: {"NeutronError": {"message": "Invalid input for operation: IP 
allocation requires subnet_id or ip_address.", "type": "InvalidInput", 
"detail": ""}}

  The function floatingip_with_assoc in [2] creates a FIP with only the FIP
  network and the private port, so I think there are a lot of UTs that need
  to be amended.

  
  [1] https://review.openstack.org/#/c/521707/
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/tests/unit/extensions/test_l3.py#L484

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1736755/+subscriptions



[Yahoo-eng-team] [Bug 1690790] Re: policy checks for detach_interface not working correctly

2017-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/527414
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=c999239fed8d2213a8913d1cc8fb294f7086b594
Submitter: Zuul
Branch:master

commit c999239fed8d2213a8913d1cc8fb294f7086b594
Author: Abdallah Banishamsa 
Date:   Tue Dec 12 09:25:05 2017 -0500

Prevent non-admin users from detaching interfaces

Remove the option to detach_interface from running instances for
non-admin users.

Change-Id: Id641bde457e8723ace0bc1e49aab2c46b2227485
Closes-bug: #1690790


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1690790

Title:
  policy checks for detach_interface not working correctly

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Hello,

  We want to prevent non-admin users from detaching interfaces from running
  instances, so we changed the policy to
  "compute:detach_interface": "role:admin",
  but non-admin users still see the option and can use it.

  Am I missing some place to declare this option?

  Kind regards,
  Kenneth Cummings

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1690790/+subscriptions



[Yahoo-eng-team] [Bug 1737952] [NEW] 500 error if custom property key is greater than 255 characters

2017-12-13 Thread Abhishek Kekane
Public bug reported:

While creating an image, if the user passes a property key longer than 255
characters, the request fails with a 500 error. Ideally it should return 400
Bad Request to the user.

Steps to reproduce:
1. Create image
glance image-create --name mySignedImage --container-format bare --disk-format 
qcow2 --property 
"abc"="12434"
 --file ~/devstack/local.conf

Output:
500 Internal Server Error: The server has either erred or is incapable of 
performing the requested operation. (HTTP 500)
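
A minimal validation sketch (not the actual Glance fix; the constant and helper
below are hypothetical) of rejecting over-long keys at the API layer so the
user gets a 400 instead of a database error:

    from webob import exc

    MAX_PROPERTY_KEY_LEN = 255  # matches the image_properties.name column size

    def validate_property_keys(properties):
        # Reject over-long keys before they ever reach the database insert.
        for key in properties:
            if len(key) > MAX_PROPERTY_KEY_LEN:
                raise exc.HTTPBadRequest(
                    explanation="Image property key exceeds %d characters"
                                % MAX_PROPERTY_KEY_LEN)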

g-api logs:
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi [None req-61ff5b17-2af5-4cf1-80d5-496ae776da25 demo demo] 
Caught error: (pymysql.err.DataError) (1406, u"Data too long for column 'name' 
at row 1") [SQL: u'INSERT INTO image_properties (created_at, updated_at, 
deleted_at, deleted, image_id, name, value) VALUES (%(created_at)s, 
%(updated_at)s, %(deleted_at)s, %(deleted)s, %(image_id)s, %(name)s, 
%(value)s)'] [parameters: {'name': 
u'abcc
 ... (150 characters truncated) ... 
c',
 'deleted': 0, 'created_at': datetime.datetime(2017, 12, 13, 11, 2, 9, 508041), 
'updated_at': datetime.datetime(2017, 12, 13, 11, 2, 9, 508048), 'value': 
u'12434', 'image_id': 'e376fa83-0082-4125-a79a-60696a0e348d', 'deleted_at': 
None}]: DBDataError: (pymysql.err.DataError) (1406, u"Data too long for column 
'name' at row 1") [SQL: u'INSERT INTO image_properties (created_at, updated_at, 
deleted_at, deleted, image_id, name, value) VALUES (%(created_at)s, 
%(updated_at)s, %(deleted_at)s, %(deleted)s, %(image_id)s, %(name)s, 
%(value)s)'] [parameters: {'name': 
u'abcc
 ... (150 characters truncated) ... 
c',
 'deleted': 0, 'created_at': datetime.datetime(2017, 12, 13, 11, 2, 9, 508041), 
'updated_at': datetime.datetime(2017, 12, 13, 11, 2, 9, 508048), 'value': 
u'12434', 'image_id': 'e376fa83-0082-4125-a79a-60696a0e348d', 'deleted_at': 
None}]
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi Traceback (most recent call last):
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/common/wsgi.py", line 1222, 
in __call__
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi request, **action_args)
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/common/wsgi.py", line 1261, 
in dispatch
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi return method(*args, **kwargs)
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/common/utils.py", line 363, 
in wrapped
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi return func(self, req, *args, **kwargs)
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/api/v2/images.py", line 67, 
in create
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi image_repo.add(image)
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/domain/proxy.py", line 94, 
in add
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi result = self.base.add(base_item)
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/notifier.py", line 514, in 
add
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi super(ImageRepoProxy, self).add(image)
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi   File "/opt/stack/glance/glance/domain/proxy.py", line 94, 
in add
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.common.wsgi result = self.base.add(base_item)
Dec 13 11:02:09 devstack devstack@g-api.service[20154]: ERROR 
glance.com

[Yahoo-eng-team] [Bug 1737924] [NEW] Server Groups data is not displayed in Instance Details page

2017-12-13 Thread Sudheer Kalla
Public bug reported:

In the instance launch wizard there are options for selecting server groups
and adding scheduler hints.

However, these details are not displayed on the instance details page along
with the other data.
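
A hedged sketch (plain python-novaclient here; Horizon itself would go through
its own API wrappers) of how the server group an instance belongs to could be
looked up for display; 'sess' and 'server_id' are assumed to exist:

    from novaclient import client as nova_client

    nova = nova_client.Client("2.1", session=sess)
    # Server groups expose their member instance UUIDs, so find the group
    # that contains this instance, if any.
    group = next((g for g in nova.server_groups.list()
                  if server_id in g.members), None)
    if group:
        print(group.name, group.policies)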

** Affects: horizon
 Importance: Undecided
 Status: New

** Summary changed:

- Server group data is not displayed in Instance Details page
+ Server Groups data is not displayed in Instance Details page

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1737924

Title:
  Server Groups data is not displayed in Instance Details page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the instance launch wizard there are options for selecting server
  groups and adding scheduler hints.

  However, these details are not displayed on the instance details page
  along with the other data.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1737924/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1737917] [NEW] Networking Option 2: Self-service networks in neutron

2017-12-13 Thread Aaasidncza
Public bug reported:

Hi all, cool developers of OpenStack. I followed the Pike install doc and 
OpenStack is running successfully now. In the Neutron configuration section, 
under "Verify operation" for Networking Option 2: Self-service networks (I 
chose the self-service network option, deployed on CentOS 7 x64, everything up 
to date), the doc says:
The output should indicate four agents on the controller node and one agent on 
each compute node.

$ openstack network agent list

+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | None              | True  | UP    | neutron-linuxbridge-agent |
| 830344ff-dc36-4956-84f4-067af667a0dc | L3 agent           | controller | nova              | True  | UP    | neutron-l3-agent          |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+


I ran this command on the controller node as well, after the OpenStack setup 
and configuration were finished, and this is the output I got:
[root@controller neutron]# openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 010608cc-01cf-4143-97d7-df617aaf2ac1 | Linux bridge agent | compute1   | None              | :-)   | UP    | neutron-linuxbridge-agent |
| 09cd7c61-b874-44a2-afd1-691bedbb8a97 | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
| 865bcc2f-183f-4c3a-8b05-b6724beca5f0 | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent          |
| 960a7d2c-5345-4245-8494-3adf885a61ff | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| c2bb1518-817a-4d6e-a159-7ac01858f874 | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+


It seems to be running normally, just like the official docs show. And ip 
netns on the controller:
[root@controller neutron]# ip netns exec qrouter-89466cea-7d58-4a45-92da-e636c0958358 ping -c 5 www.bing.com
PING cn-0001.cn-msedge.net (202.89.233.100) 56(84) bytes of data.
64 bytes from 202.89.233.100 (202.89.233.100): icmp_seq=1 ttl=115 time=31.6 ms
64 bytes from 202.89.233.100 (202.89.233.100): icmp_seq=2 ttl=115 time=31.5 ms
64 bytes from 202.89.233.100 (202.89.233.100): icmp_seq=3 ttl=115 time=31.4 ms
64 bytes from 202.89.233.100 (202.89.233.100): icmp_seq=4 ttl=115 time=32.4 ms
64 bytes from 202.89.233.100 (202.89.233.100): icmp_seq=5 ttl=115 time=31.8 ms

--- cn-0001.cn-msedge.net ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4003ms
rtt min/avg/max/mdev = 31.490/31.810/32.405/0.397 ms
[root@controller neutron]# ip netns
qrouter-89466cea-7d58-4a45-92da-e636c0958358 (id: 2)
qdhcp-fca97929-aed3-45dc-9f19-f7ead767fbc3 (id: 0)
qdhcp-c08c44ed-d71e-4acb-b1a5-3ebc4715a01d (id: 1)


Every virtual machine on OpenStack runs perfectly (self-service or provider 
network, with access to the Internet). Except that one day I logged into the 
compute node and found that linuxbridge.log had grown too big (over 300 MB). 
I quickly used grep to filter out most of the INFO messages and found many 
ERROR entries like the one below (it is the only type of error, and it occurs 
at a fixed interval):

2017-12-13 17:23:16.030 1334 INFO neutron.agent.securitygroups_rpc [req-7fbf230a-bf45-4817-a7e8-447005e1700a - - - - -] Security group member updated [u'626f761f-7451-42cf-afbf-724da51190c0']
2017-12-13 17:23:16.172 1334 ERROR neutron.plugins.ml2.drivers.agent._common_agent [req-7fbf230a-bf45-4817-a7e8-447005e1700a - - - - -] Error occurred while removing port tapa97e89f4-2d: RemoteError: Remote error: AgentNotFoundByTypeHost Agent with agent_type=L3 agent and host=co

[Yahoo-eng-team] [Bug 1280299] Re: missing choice 'storage type' when creating a new volume during instance creation

2017-12-13 Thread Crazik
I would like to reopen this issue. 
I need to have a choice of volume type when creating new volume-based instances.


** Changed in: horizon
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1280299

Title:
  missing choice 'storage type' when creating a new volume during
  instance creation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There should be a choice for the storage type when choosing 'boot from
  image (create a new volume)' or 'boot from snapshot (creates a new
  volume)' as instance boot source in the launch instance dialog in the
  instances panel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1280299/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1737900] [NEW] Random volume type on create volume from image

2017-12-13 Thread Crazik
Public bug reported:

I want to create a new volume from an image, starting from the context menu on 
the images list 
(url: horizon/project/images). 
There is a form where I can specify name, size, AZ, and volume type. 
In all my tests the volume type is randomly chosen from the two existing types 
(in my setup).
The same actions run from the CLI work correctly.
There is no error in the logs.
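
For comparison, a CLI call like the following (image, type, and volume names
are illustrative) honours the requested type:

$ openstack volume create --image <image-id> --type <volume-type> --size 10 test-volume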


environment:
Ocata
openstack-dashboard 3:11.0.1-0ubuntu1~cloud0
python-glanceclient 1:2.6.0-0ubuntu1~cloud0
python-cinderclient 1:1.11.0-0ubuntu2~cloud0

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1737900

Title:
  Random volume type on create volume from image

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I want to create a new volume from an image, starting from the context menu 
on the images list 
  (url: horizon/project/images). 
  There is a form where I can specify name, size, AZ, and volume type. 
  In all my tests the volume type is randomly chosen from the two existing 
types (in my setup).
  The same actions run from the CLI work correctly.
  There is no error in the logs.

  
  environment:
  Ocata
  openstack-dashboard 3:11.0.1-0ubuntu1~cloud0
  python-glanceclient 1:2.6.0-0ubuntu1~cloud0
  python-cinderclient 1:1.11.0-0ubuntu2~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1737900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1692472] Re: neutron should update the network after deleting the rbac policy for "access_as_external".

2017-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/512484
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=e3ca20fb57c30e218a23dc6ea7098cb2234d2981
Submitter: Zuul
Branch: master

commit e3ca20fb57c30e218a23dc6ea7098cb2234d2981
Author: Dongcan Ye 
Date:   Tue Oct 17 13:14:36 2017 +0800

Update network external attribute for RBAC change

If a network's RBAC external attribute is deleted, we
should update the router:external attribute to False
if there is no other access_as_external rbac policy on the network.

Tempest API test patch: https://review.openstack.org/#/c/520255/

Change-Id: Ibdbe8a88581e54250259825bbf1c77485fd09f89
Closes-Bug: #1692472


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1692472

Title:
  neutron should update the network after deleting the rbac policy for
  "access_as_external".

Status in neutron:
  Fix Released

Bug description:
  When a user creates an RBAC policy with the "access_as_external" action for
  a network, using the command "neutron rbac-create --type network --action
  access_as_external TEST", the network becomes external. After deleting the
  RBAC policy, the network is not updated; it still shows router:external as
  true.
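
  A minimal reproduction sketch (the RBAC policy ID is a placeholder and the
  target tenant value is an assumption; with the fix, the final command should
  show router:external back at False once no access_as_external policy
  remains):

  $ neutron rbac-create --type network --action access_as_external --target-tenant '*' TEST
  $ neutron rbac-delete <rbac-policy-id>
  $ neutron net-show TEST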

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1692472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1737894] [NEW] 'a bug for test(ignore it )'

2017-12-13 Thread qiu.jibao
Public bug reported:

it is for test, ignore it please

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1737894

Title:
  'a bug for test(ignore it )'

Status in OpenStack Identity (keystone):
  New

Bug description:
  it is for test, ignore it please

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1737894/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1737895] [NEW] 'a bug for test(ignore it )'

2017-12-13 Thread qiu.jibao
Public bug reported:

it is for test, ignore it please

** Affects: keystone
 Importance: Undecided
 Assignee: qiu.jibao (qiu.jibao)
 Status: Invalid

** Changed in: keystone
 Assignee: (unassigned) => qiu.jibao (qiu.jibao)

** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1737895

Title:
  'a bug for test(ignore it )'

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  it is for test, ignore it please

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1737895/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1737892] [NEW] Fullstack test test_qos.TestBwLimitQoSOvs.test_bw_limit_qos_port_removed failing many times

2017-12-13 Thread Slawek Kaplonski
Public bug reported:

It looks like this test fails every time for the same reason:

2017-12-13 05:29:35.846 | Captured traceback:
2017-12-13 05:29:35.848 | ~~~
2017-12-13 05:29:35.850 | Traceback (most recent call last):
2017-12-13 05:29:35.852 |   File "neutron/tests/base.py", line 132, in func
2017-12-13 05:29:35.854 | return f(self, *args, **kwargs)
2017-12-13 05:29:35.856 |   File "neutron/tests/fullstack/test_qos.py", 
line 237, in test_bw_limit_qos_port_removed
2017-12-13 05:29:35.858 | self._wait_for_bw_rule_removed(vm, 
self.direction)
2017-12-13 05:29:35.860 |   File "neutron/tests/fullstack/test_qos.py", 
line 121, in _wait_for_bw_rule_removed
2017-12-13 05:29:35.862 | self._wait_for_bw_rule_applied(vm, None, 
None, direction)
2017-12-13 05:29:35.864 |   File "neutron/tests/fullstack/test_qos.py", 
line 219, in _wait_for_bw_rule_applied
2017-12-13 05:29:35.866 | lambda: 
vm.bridge.get_ingress_bw_limit_for_port(
2017-12-13 05:29:35.868 |   File "neutron/common/utils.py", line 632, in 
wait_until_true
2017-12-13 05:35:14.907 | raise WaitTimeout("Timed out after %d 
seconds" % timeout)
2017-12-13 05:35:14.909 | neutron.common.utils.WaitTimeout: Timed out after 
60 seconds

Example fail: http://logs.openstack.org/73/519573/12/check/legacy-neutron-dsvm-fullstack/f27cbf9/logs/devstack-gate-post_test_hook.txt#_2017-12-13_05_29_35_842

It failed 16 times in 24h between 9:00 12.12.2017 and 9:00 13.12.2017:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22neutron.tests.fullstack.test_qos.TestBwLimitQoSOvs.test_bw_limit_qos_port_removed%5C%22%20AND%20message%3A%5C%22FAILED%5C%22
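
For context, a hedged sketch of what the failing wait boils down to (names
follow the traceback; the exact comparison made by the test is paraphrased):
the test polls the OVS bridge until the bandwidth-limit rule disappears and
gives up after 60 seconds, which is where WaitTimeout is raised.

    from neutron.common import utils

    # Poll until the ingress bandwidth limit is gone from the port, or time out.
    utils.wait_until_true(
        lambda: vm.bridge.get_ingress_bw_limit_for_port(vm.port.name) == (None, None),
        timeout=60,
        sleep=1)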

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: fullstack gate-failure qos

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1737892

Title:
  Fullstack test
  test_qos.TestBwLimitQoSOvs.test_bw_limit_qos_port_removed failing many
  times

Status in neutron:
  Confirmed

Bug description:
  It looks like this test fails every time for the same reason:

  2017-12-13 05:29:35.846 | Captured traceback:
  2017-12-13 05:29:35.848 | ~~~
  2017-12-13 05:29:35.850 | Traceback (most recent call last):
  2017-12-13 05:29:35.852 |   File "neutron/tests/base.py", line 132, in 
func
  2017-12-13 05:29:35.854 | return f(self, *args, **kwargs)
  2017-12-13 05:29:35.856 |   File "neutron/tests/fullstack/test_qos.py", 
line 237, in test_bw_limit_qos_port_removed
  2017-12-13 05:29:35.858 | self._wait_for_bw_rule_removed(vm, 
self.direction)
  2017-12-13 05:29:35.860 |   File "neutron/tests/fullstack/test_qos.py", 
line 121, in _wait_for_bw_rule_removed
  2017-12-13 05:29:35.862 | self._wait_for_bw_rule_applied(vm, None, 
None, direction)
  2017-12-13 05:29:35.864 |   File "neutron/tests/fullstack/test_qos.py", 
line 219, in _wait_for_bw_rule_applied
  2017-12-13 05:29:35.866 | lambda: 
vm.bridge.get_ingress_bw_limit_for_port(
  2017-12-13 05:29:35.868 |   File "neutron/common/utils.py", line 632, in 
wait_until_true
  2017-12-13 05:35:14.907 | raise WaitTimeout("Timed out after %d 
seconds" % timeout)
  2017-12-13 05:35:14.909 | neutron.common.utils.WaitTimeout: Timed out 
after 60 seconds

  Example fail: http://logs.openstack.org/73/519573/12/check/legacy-neutron-dsvm-fullstack/f27cbf9/logs/devstack-gate-post_test_hook.txt#_2017-12-13_05_29_35_842

  It failed 16 times in 24h between 9:00 12.12.2017 and 9:00 13.12.2017:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22neutron.tests.fullstack.test_qos.TestBwLimitQoSOvs.test_bw_limit_qos_port_removed%5C%22%20AND%20message%3A%5C%22FAILED%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1737892/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733852] Re: Incorrect ARP entries in new DVR routers for Octavia VRRP addresses

2017-12-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/524037
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=af73882a9db994b06d8df18d4d5abc05c7aecd32
Submitter: Zuul
Branch: master

commit af73882a9db994b06d8df18d4d5abc05c7aecd32
Author: Daniel Russell 
Date:   Wed Nov 29 15:27:06 2017 +1100

Prevent LBaaS VRRP ports from populating DVR router ARP table

Prevents the MAC address of the VIP address of an LBaaS or
LBaaSv2 instance from populating in the DVR router ARP table

Change-Id: If49aaa48a5e95ccd0a236db984d3984a6e44c87c
Closes-Bug: 1733852
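
A hedged sketch of the idea behind the change (the constant names come from
neutron_lib; the exact hook point in the DVR code is paraphrased, not quoted
from the patch):

    from neutron_lib import constants

    # Ports owned by LBaaS/LBaaSv2 instances carry VRRP/VIP addresses that
    # should not be turned into static ARP entries on DVR routers.
    ARP_EXCLUDED_DEVICE_OWNERS = (constants.DEVICE_OWNER_LOADBALANCER,
                                  constants.DEVICE_OWNER_LOADBALANCERV2)

    def should_populate_arp(port):
        return port['device_owner'] not in ARP_EXCLUDED_DEVICE_OWNERS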


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733852

Title:
  Incorrect ARP entries in new DVR routers for Octavia VRRP addresses

Status in neutron:
  Fix Released

Bug description:
  Hi,

  I am running Ocata Neutron with OVS DVR, l2_population is on, and
  Ocata Octavia is also installed.  Under a certain circumstance, I am
  getting incorrect ARP entries in the routers for the VRRP address of
  the loadbalancers created.

  Here is the ARP table for a router that pre-existed a load balancer creation:
  [root@ ~]# ip netns exec qrouter-6b5fe9df-eab2-4147-b95f-419d0c620344 ip neigh
  10.2.2.11 dev qr-458b6819-4f lladdr fa:16:3e:3c:df:9c PERMANENT
  10.2.2.1 dev qr-458b6819-4f lladdr fa:16:3e:f0:45:c9 PERMANENT
  10.2.2.2 dev qr-458b6819-4f lladdr fa:16:3e:70:0e:8c PERMANENT
  [root@ ~]#

  After creating a load balancer, ports are created for the load balancer
  instance in the project network and for the VRRP address (but as far as I
  understand, the VRRP port is just there to reserve the IP):
  [root@ /]# openstack port show 9bb862a7-fdb5-487e-94f5-4fac8b55d5d2
  +-------------------------+--------------------------------------------------------------------------+
  | Field                   | Value                                                                    |
  +-------------------------+--------------------------------------------------------------------------+
  | admin_state_up          | UP                                                                       |
  | allowed_address_pairs   | ip_address='10.2.2.8', mac_address='fa:16:3e:78:82:cb'                   |
  | binding_host_id         |                                                                          |
  | binding_profile         |                                                                          |
  | binding_vif_details     | ovs_hybrid_plug='True', port_filter='True'                               |
  | binding_vif_type        | ovs                                                                      |
  | binding_vnic_type       | normal                                                                   |
  | created_at              | 2017-11-22T10:35:11Z                                                     |
  | description             |                                                                          |
  | device_id               | 3355a8e7-95fe-4f15-8233-3ffcbb935d5c                                     |
  | device_owner            | compute:None                                                             |
  | dns_assignment          | fqdn='amphora-8cc77a78-359e-4829-968b-2d026869d845.cloud..', hostname    |
  |                         | ='amphora-8cc77a78-359e-4829-968b-2d026869d845', ip_address='10.2.2.5'   |
  | dns_name                | amphora-8cc77a78-359e-4829-968b-2d026869d845                             |
  | extra_dhcp_opts         |                                                                          |
  | fixed_ips               | ip_address='10.2.2.5', subnet_id='0c8633c6-96a1-4c0e-a73f-212eddfd6172'  |
  | id                      | 9bb862a7-fdb5-487e-94f5-4fac8b55d5d2                                     |
  | ip_address              | None                                                                     |
  | mac_address             | fa:16:3e:78:82:cb                                                        |
  | name                    | octavia-lb-vrrp-8cc77a78-359e-4829-968b-2d026869d845                     |
  | network_id              | 8d365ce2-d909-410d-991c-7f503a65d67b                                     |
  | option_name             | None                                                                     |
  | option_value            | None                                                                     |
  | port_secur