[Yahoo-eng-team] [Bug 1724686] Re: authentication code hangs when there are three or more admin keystone endpoints

2018-01-27 Thread Launchpad Bug Tracker
[Expired for OpenStack Identity (keystone) because there has been no
activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1724686

Title:
  authentication code hangs when there are three or more admin keystone
  endpoints

Status in OpenStack Identity (keystone):
  Expired
Status in python-keystoneclient:
  Expired

Bug description:
  I'm running stable/pike devstack, and I was playing around with what
  happens when there are many endpoints in multiple regions, and I
  stumbled over a scenario where the keystone authentication code hangs.

  My original endpoint list looked like this:

  ubuntu@devstack:/opt/stack/devstack$ openstack endpoint list
  +----------------------------------+-----------+--------------+-----------------+---------+-----------+--------------------------------------------------+
  | ID                               | Region    | Service Name | Service Type    | Enabled | Interface | URL                                              |
  +----------------------------------+-----------+--------------+-----------------+---------+-----------+--------------------------------------------------+
  | 0a9979ebfdbf48ce91ccf4e2dd952c1a | RegionOne | kingbird     | synchronization | True    | internal  | http://127.0.0.1:8118/v1.0                       |
  | 11d5507afe2a4eddb4f030695699114f | RegionOne | placement    | placement       | True    | public    | http://128.224.186.226/placement                 |
  | 1e42cf139398405188755b7e00aecb4d | RegionOne | keystone     | identity        | True    | admin     | http://128.224.186.226/identity                  |
  | 2daf99edecae4afba88bb58233595481 | RegionOne | glance       | image           | True    | public    | http://128.224.186.226/image                     |
  | 2ece52e8bbb34d47b9bd5611f5959385 | RegionOne | kingbird     | synchronization | True    | admin     | http://127.0.0.1:8118/v1.0                       |
  | 4835a089666a4b03bd2f499457ade6c2 | RegionOne | kingbird     | synchronization | True    | public    | http://127.0.0.1:8118/v1.0                       |
  | 78e9fbc0a47642268eda3e3576920f37 | RegionOne | nova         | compute         | True    | public    | http://128.224.186.226/compute/v2.1              |
  | 96a1e503dc0e4520a190b01f6a0cf79c | RegionOne | keystone     | identity        | True    | public    | http://128.224.186.226/identity                  |
  | a1887dbc8c5e4af5b4a6dc5ce224b8ff | RegionOne | cinderv2     | volumev2        | True    | public    | http://128.224.186.226/volume/v2/$(project_id)s  |
  | b7d5938141694a4c87adaed5105ea3ab | RegionOne | cinder       | volume          | True    | public    | http://128.224.186.226/volume/v1/$(project_id)s  |
  | bb169382cbea4715964e4652acd48070 | RegionOne | nova_legacy  | compute_legacy  | True    | public    | http://128.224.186.226/compute/v2/$(project_id)s |
  | e01c8d8e08874d61b9411045a99d4860 | RegionOne | neutron      | network         | True    | public    | http://128.224.186.226:9696/                     |
  | f94c96ed474249a29a6c0a1bb2b2e500 | RegionOne | cinderv3     | volumev3        | True    | public    | http://128.224.186.226/volume/v3/$(project_id)s  |
  +----------------------------------+-----------+--------------+-----------------+---------+-----------+--------------------------------------------------+

  I was able to successfully run the following python code:

  from keystoneauth1 import loading
  from keystoneauth1 import session
  from keystoneclient.v3 import client

  loader = loading.get_plugin_loader("password")
  auth = loader.load_from_options(username='admin',
                                  password='secret',
                                  project_name='admin',
                                  auth_url='http://128.224.186.226/identity')
  sess = session.Session(auth=auth)
  keystone = client.Client(session=sess)
  keystone.services.list()

  I then duplicated all of the endpoints in a new region "region2", and
  was able to run the python code.  When I duplicated all the endpoints
  again in a new region "region3" (for a total of 39 endpoints) the
  python code hung at the final line.

  Removing all the "region3" endpoints allowed the python code to work
  again.

  During all of this the command "openstack endpoint list" worked fine.

  Further testing seems to indicate that it is the third "admin"
  keystone endpoint that is causing the problem.  I can add multiple
  "public" keystone endpoints, but three or more "admin" keystone
  endpoints cause the python code to hang.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1724686/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1724686] Re: authentication code hangs when there are three or more admin keystone endpoints

2018-01-27 Thread Launchpad Bug Tracker
[Expired for python-keystoneclient because there has been no activity
for 60 days.]

** Changed in: python-keystoneclient
   Status: Incomplete => Expired


[Yahoo-eng-team] [Bug 1744786] Re: SchedulerReportClient.put with empty (not None) payload errors 415

2018-01-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/536545
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=5a4872ed306ad33a2bceb50846a4b38ad3b95c73
Submitter: Zuul
Branch:master

commit 5a4872ed306ad33a2bceb50846a4b38ad3b95c73
Author: Eric Fried 
Date:   Mon Jan 22 13:49:52 2018 -0600

Report Client: PUT empty (not None) JSON data

Previously if a False-ish payload was sent to SchedulerReportClient.put,
we wouldn't send it through in the API call at all.  But False-ish
payloads may be legitimate: e.g. there is currently no DELETE API for
/resource_providers/{u}/aggregates so this is how you would remove all
aggregate associations for a provider.

In any case, placement's PUT API refuses to accept a request that
doesn't have Content-Type: application/json, which is set automatically
by Session if json is not None.

With this change set, we send payloads through to PUT unless they're
actually None.

Change-Id: I69d2b16d515590907ca0e0dc4da77dcf8e539976
Closes-Bug: #1744786


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1744786

Title:
  SchedulerReportClient.put with empty (not None) payload errors 415

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  
https://github.com/openstack/nova/blob/f0d830d56d20c7f34372cd3c68d13a94bdf645a6/nova/scheduler/client/report.py#L295-L302

   295      def put(self, url, data, version=None):
   296          # NOTE(sdague): using json= instead of data= sets the
   297          # media type to application/json for us. Placement API is
   298          # more sensitive to this than other APIs in the OpenStack
   299          # ecosystem.
   300          kwargs = {'microversion': version}
   301          if data:
   302              kwargs['json'] = data

  On line 301, if data is a False value other than None, we won't set
  the json kwarg, so Session won't set the content type to
  application/json, and we'll run afoul of:

  415 Unsupported Media Type
  The request media type None is not supported by this server.
  The media type None is not supported, use application/json

  A normal "workaround" - which is being used for e.g. inventories - is
  for the caller to check for "empty" and hit the DELETE API instead.

  But we don't have a DELETE API for resource provider aggregates
  (/resource_providers/{u}/aggregates).
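  The merged fix sends the payload through unless it is actually None. A
  minimal sketch of that guard (build_put_kwargs is an illustrative helper,
  not nova's exact code):

```python
def build_put_kwargs(data, version=None):
    """Build Session.put kwargs, keeping False-ish (but not None) payloads.

    Passing json= (even for an empty list or dict) makes Session set
    Content-Type: application/json, which the placement API requires.
    """
    kwargs = {'microversion': version}
    if data is not None:  # was `if data:`, which dropped [] and {}
        kwargs['json'] = data
    return kwargs
```

  With this, PUTting an empty aggregates list still carries the JSON
  content type, so placement no longer answers 415.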

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1744786/+subscriptions



[Yahoo-eng-team] [Bug 1744824] Re: functional tests broken under py27

2018-01-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/537951
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=6f63d28d9b8727409fd982d5920846a942b6d43e
Submitter: Zuul
Branch:master

commit 6f63d28d9b8727409fd982d5920846a942b6d43e
Author: Erno Kuvaja 
Date:   Thu Jan 25 16:16:22 2018 +

Fix py27 eventlet issue <0.22.0

Closes-Bug: #1744824

Change-Id: Ib9f8a52136e25d1cb609d465ca5d859523d9acc6


** Changed in: glance
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1744824

Title:
  functional tests broken under py27

Status in Glance:
  Fix Released

Bug description:
  Over the weekend, the py27 tests began failing.  To reproduce, you
  need to use an upgraded Ubuntu.  It appears to be a distro package
  issue (though it's not clear ATM what package).

  The failing tests are functional tests, the unit tests pass OK.

  The py35 tests all pass (both unit and functional).

  In the meantime, the requirements team has dropped the glance py27
  tests from the requirements gate:
  https://review.openstack.org/#/c/536082/

  We should fix soon to get our tests back into the gate to prevent
  other bad stuff from happening to glance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1744824/+subscriptions



[Yahoo-eng-team] [Bug 1602081] Re: Use oslo.context's policy dict

2018-01-27 Thread Adam Young
Fixed in keystone by commit f71a78db86632dccb391782e62da69a4627c7cad
https://review.openstack.org/#/c/523650/

** Changed in: keystone
 Assignee: (unassigned) => Adam Young (ayoung)

** Changed in: keystone
   Status: Triaged => Fix Released

** Changed in: keystone
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1602081

Title:
  Use oslo.context's policy dict

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in OpenStack Heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This is a cross project goal to standardize the values available to
  policy writers and to improve the basic oslo.context object. It is
  part of the follow up work to bug #1577996 and bug #968696.

  There has been an ongoing problem for how we define the 'admin' role.
  Because tokens are project scoped having the 'admin' role on any
  project granted you the 'admin' role on all of OpenStack. As a
  solution to this keystone defined an is_admin_project field so that
  keystone defines a single project that your token must be scoped to to
  perform admin operations. This has been implemented.

  The next phase of this is to make all the projects understand the
  X-Is-Admin-Project header from keystonemiddleware and pass it to
  oslo_policy. However, this pattern, in which keystone changes something
  and then has to visit every project to fix it, has been repeated a
  number of times now, and we would like to make it much more automatic.

  Ongoing work has enhanced the base oslo.context object to include both
  the load_from_environ and to_policy_values methods. The
  load_from_environ classmethod takes an environment dict with all the
  standard auth_token and oslo middleware headers and loads them into
  their standard place on the context object.

  The to_policy_values() then creates a standard credentials dictionary
  with all the information that should be required to enforce policy
  from the context. The combination of these two methods means in future
  when authentication information needs to be passed to policy it can be
  handled entirely by oslo.context and does not require changes in each
  individual service.

  Note that in future a similar pattern will hopefully be employed to
  simplify passing authentication information over RPC to solve the
  timeout issues. This is a prerequisite for that work.

  There are a few common problems in services that are required to make
  this work:

  1. Most service context.__init__ functions take and discard **kwargs.
  This is so if the context.from_dict receives arguments it doesn't know
  how to handle (possibly because new things have been added to the base
  to_dict) it ignores them. Unfortunately to make the load_from_environ
  method work we need to pass parameters to __init__ that are handled by
  the base class.

  To make this work we simply have to do a better job of using
  from_dict. Instead of passing everything to __init__ and ignoring what
  we don't know, we have from_dict extract only the parameters that
  context knows how to use and call __init__ with those.

  2. The parameters passed to the base context.__init__ are old.
  Typically they are user and tenant where most services expect user_id
  and project_id. There is ongoing work to improve this in oslo.context
  but for now we have to ensure that the subclass correctly sets and
  uses the right variable names.

  3. Some services provide additional information to the policy
  enforcement method. To continue to make this function we will simply
  override the to_policy_values method in the subclasses.
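
  The from_dict pattern in point 1 can be sketched with plain-Python
  stand-ins (these classes are illustrative, not the real oslo.context
  API):

```python
class BaseContext:
    """Stand-in for oslo.context's base RequestContext."""

    def __init__(self, user_id=None, project_id=None, is_admin_project=True):
        self.user_id = user_id
        self.project_id = project_id
        self.is_admin_project = is_admin_project

    def to_policy_values(self):
        # Standard credentials dict handed to policy enforcement.
        return {'user_id': self.user_id,
                'project_id': self.project_id,
                'is_admin_project': self.is_admin_project}


class ServiceContext(BaseContext):
    """A service subclass that extracts only known parameters."""

    KNOWN = ('user_id', 'project_id', 'is_admin_project', 'request_id')

    def __init__(self, request_id=None, **kwargs):
        super().__init__(**kwargs)
        self.request_id = request_id

    @classmethod
    def from_dict(cls, values):
        # Instead of passing everything to __init__ and discarding
        # unknowns there, keep only the parameters the context understands.
        return cls(**{k: v for k, v in values.items() if k in cls.KNOWN})
```

  The point of the pattern: when the base context later grows new fields,
  from_dict silently ignores them instead of breaking __init__.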

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1602081/+subscriptions



[Yahoo-eng-team] [Bug 1742827] Re: nova-scheduler reports dead compute nodes but nova-compute is enabled and up

2018-01-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/533371
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c98ac6adc561d70d34c724703a437b8435e6ddfa
Submitter: Zuul
Branch:master

commit c98ac6adc561d70d34c724703a437b8435e6ddfa
Author: melanie witt 
Date:   Sat Jan 13 21:49:54 2018 +

Stop globally caching host states in scheduler HostManager

Currently, in the scheduler HostManager, we cache host states in
a map global to all requests. This used to be okay because we were
always querying the entire compute node list for every request to
pass on to filtering. So we cached the host states globally and
updated them per request and removed "dead nodes" from the cache
(compute nodes still in the cache that were not returned from
ComputeNodeList.get_all).

As of Ocata, we started filtering our ComputeNodeList query based on
an answer from placement about which resource providers could satisfy
the request, instead of querying the entire compute node list every
time. This is much more efficient (don't consider compute nodes that
can't possibly fulfill the request) BUT it doesn't play well with the
global host state cache. We started seeing "Removing dead compute node"
messages in the logs, signaling removal of compute nodes from the
global cache when compute nodes were actually available.

If request A comes in and all compute nodes can satisfy its request,
then request B arrives concurrently and no compute nodes can satisfy
its request, request B will remove all the compute nodes
from the global host state cache and then request A will get "no valid
hosts" at the filtering stage because get_host_states_by_uuids returns
a generator that hands out hosts from the global host state cache.

This removes the global host state cache from the scheduler HostManager
and instead generates a fresh host state map per request and uses that
to return hosts from the generator. Because we're filtering the
ComputeNodeList based on a placement query per request, each request
can have a completely different set of compute nodes that can fulfill
it, so we're not gaining much by caching host states anyway.

Co-Authored-By: Dan Smith 

Closes-Bug: #1742827
Related-Bug: #1739323

Change-Id: I40c17ed88f50ecbdedc4daf368fff10e90e7be11
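
The race described above can be illustrated with a toy host manager
(class and method names are illustrative, not nova's actual code):

```python
class GlobalCacheHostManager:
    """Buggy pattern: one host-state map shared by all requests."""

    def __init__(self):
        self.host_state_map = {}

    def get_host_states(self, hosts_from_placement):
        # Prune "dead" nodes: anything absent from *this* request's
        # placement answer, which clobbers concurrent requests' views.
        for cached in list(self.host_state_map):
            if cached not in hosts_from_placement:
                del self.host_state_map[cached]
        for host in hosts_from_placement:
            self.host_state_map[host] = {'host': host}
        return self.host_state_map


class PerRequestHostManager:
    """Fixed pattern: build a fresh host-state map per request."""

    def get_host_states(self, hosts_from_placement):
        return {host: {'host': host} for host in hosts_from_placement}
```

With the global cache, a request whose placement answer is empty empties
the map out from under every other in-flight request; a per-request map
cannot be clobbered that way.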


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1742827

Title:
  nova-scheduler reports dead compute nodes but nova-compute is enabled
  and up

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  (originally reported by David Manchado in
  https://bugzilla.redhat.com/show_bug.cgi?id=1533196 )

  Description of problem:
  We are seeing that nova-scheduler is removing compute nodes because it
considers them dead, but "openstack compute service list" reports nova-compute
to be up and running.
  We can see nova-scheduler log entries with the following pattern:
  - Removing dead compute node XXX from scheduler
  - Filter ComputeFilter returned 0 hosts
  - Filtering removed all hosts for the request with instance ID 
'11feeba9-f46c-416d-a97e-7c0c9d565b5a'. Filter results: 
['AggregateInstanceExtraSpecsFilter: (start: 19, end: 2)', 
'AggregateCoreFilter: (start: 2, end: 2)', 'AggregateDiskFilter: (start: 2, 
end: 2)', 'AggregateRamFilter: (start: 2, end: 2)', 'RetryFilter: (start: 2, 
end: 2)', 'AvailabilityZoneFilter: (start: 2, end: 2)', 'ComputeFilter: (start: 
2, end: 0)']

  Version-Release number of selected component (if applicable):
  Ocata

  How reproducible:
  N/A

  Actual results:
  Instances are not being spawned reporting 'no valid host found' because of 

  Additional info:
  This has been happening for a week.
  We did an upgrade from Newton three weeks ago.
  We have also done a minor update and the issue still persists.

  Nova related RPMs
  openstack-nova-scheduler-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  python2-novaclient-7.1.2-1.el7.noarch
  openstack-nova-novncproxy-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  openstack-nova-cert-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  openstack-nova-console-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  openstack-nova-conductor-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  openstack-nova-common-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  openstack-nova-compute-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  openstack-nova-placement-api-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  puppet-nova-10.4.2-0.2018010220.f4bc1f0.el7.centos.noarch
  openstack-nova-api-15.1.1-0.20180103153502.ff2231f.el7.centos.noarch
  

[Yahoo-eng-team] [Bug 1742962] Re: nova functional test does not triggered on notification sample only changes

2018-01-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/533210
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d4377c2d537ea998ed96cc8d7071877eefab994f
Submitter: Zuul
Branch:master

commit d4377c2d537ea998ed96cc8d7071877eefab994f
Author: Balazs Gibizer 
Date:   Fri Jan 12 16:23:00 2018 +0100

Make sure that functional test triggered on sample changes

To be able to define different irrelevant-files for the functional jobs
than the ones defined in openstack-zuul-jobs we need to copy the jobs to
the nova tree and modify the fields in tree.

Technically we could factor out the irrelevant-files regexp list from
functional and functional-py35 jobs as they are the same today. However
in the future when they diverge we cannot simply override the
irrelevant-files in one of the jobs. Therefore this patch does not
introduce a common base job for functional and functional-py35 jobs,
to discourage trying to override.

The openstack-tox-functional and fuctional-py35 are removed from the
nova part of the project-config in
I56d44f8dff41dbf3b2ff2382fa39b364f55f9a44

Closes-Bug: #1742962
Change-Id: Ia684786d1622da7af31aa4479fc883a7c65848ff


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1742962

Title:
  nova functional test does not triggered on notification sample only
  changes

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  In Progress
Status in OpenStack Compute (nova) pike series:
  In Progress

Bug description:
  As discovered during https://bugs.launchpad.net/nova/+bug/1742935,
  the openstack-tox-functional job does not trigger on commits that only
  change the notification sample files. But those files are used during
  the functional tests, so such commits can break them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1742962/+subscriptions



[Yahoo-eng-team] [Bug 1736759] Re: Glance images can contain no data

2018-01-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/526329
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c10a614e92d15280f05574d82bbede6df6aaeec6
Submitter: Zuul
Branch:master

commit c10a614e92d15280f05574d82bbede6df6aaeec6
Author: Stephen Finucane 
Date:   Wed Dec 6 17:30:49 2017 +

Handle images with no data

There isn't really much we can do with these images, which glance tells
us are possible [1]. Simply raise an exception.

[1] 
https://docs.openstack.org/python-glanceclient/latest/reference/api/glanceclient.v2.images.html

Change-Id: I5f81393a5bb41e6a674369afb899d8a41bb2c3b4
Closes-Bug: #1736759


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1736759

Title:
  Glance images can contain no data

Status in OpenStack Compute (nova):
  Fix Released
Status in Glance Client:
  Fix Released

Bug description:
  Due to another bug [1], glance was returning None from
  'glanceclient.v2.images.Controller.data'. However, the glance
  documentation states that this is a valid return value. We should
  handle this. Logs below.

  [1] https://bugzilla.redhat.com/show_bug.cgi?id=1476448
  [2] 
https://docs.openstack.org/python-glanceclient/latest/reference/api/glanceclient.v2.images.html#glanceclient.v2.images.Controller.data
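
  The merged change ("Handle images with no data") simply raises when
  glance hands back None instead of iterating over it. A minimal
  stand-in sketch (ImageUnacceptable and download_image_data are
  illustrative names, not nova's exact code):

```python
class ImageUnacceptable(Exception):
    """Stand-in for nova's image-unacceptable exception."""


def download_image_data(image_id, image_data):
    """Reject images whose data is None before iterating over chunks.

    glanceclient's images.data() may legitimately return None; iterating
    over it raised: TypeError: 'NoneType' object is not iterable.
    """
    if image_data is None:
        raise ImageUnacceptable(
            "Image %s has no associated data" % image_id)
    return b''.join(image_data)  # join the chunk iterable into bytes
```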

  ---

  2017-08-15 17:34:01.677 1 ERROR nova.image.glance 
[req-70546b57-a282-4552-8b9e-65be1871825a bd800a91d263411393899aff269084a0 
aaed41f2e25f494c9fadd01c340f25c8 - default default] Error writing to 
/var/lib/nova/instances/_base/cae3a4306eeb5643cb6caffbe1e3050645f8aee2.part: 
'NoneType' object is not iterable
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager 
[req-70546b57-a282-4552-8b9e-65be1871825a bd800a91d263411393899aff269084a0 
aaed41f2e25f494c9fadd01c340f25c8 - default default] [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] Instance failed to spawn: TypeError: 
'NoneType' object is not iterable
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] Traceback (most recent call last):
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2125, in 
_build_resources
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] yield resources
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1940, in 
_build_and_run_instance
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] block_device_info=block_device_info)
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2793, in 
spawn
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] block_device_info=block_device_info)
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3231, in 
_create_image
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] fallback_from_host)
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3322, in 
_create_and_inject_local_root
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] instance, size, fallback_from_host)
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6968, in 
_try_fetch_image_cache
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] size=size)
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 241, 
in cache
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] *args, **kwargs)
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line 595, 
in create_image
  2017-08-15 17:34:01.679 1 ERROR nova.compute.manager [instance: 
c3fc31f1-28ab-47bf-a08b-65cfc6ab2ce0] 

[Yahoo-eng-team] [Bug 1707886] Re: Shortcoming in configure the Apache HTTP server on Ubuntu OS

2018-01-27 Thread Colleen Murphy
I'm going to mark this as invalid. The documentation for how to
configure the HTTPD Apache vhost file is here:

https://docs.openstack.org/developer/keystone/install/keystone-install-obs.html#configure-the-apache-http-server
https://docs.openstack.org/developer/keystone/install/keystone-install-rdo.html#configure-the-apache-http-server

On ubuntu, the package automatically creates an enabled vhost file for
you.

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1707886

Title:
   Shortcoming in configure the Apache HTTP server on Ubuntu OS

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: it does not mention the step 
for configuring the wsgi-keystone.conf file in Apache. Without this 
configuration file, the keystone service cannot work.
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output.

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 12.0.0.0b4.dev4 on 2017-07-31 20:29
  SHA: 6d3f29f016f21b760ee778b7519de4497a4fdc56
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-install-ubuntu.rst
  URL: 
https://docs.openstack.org/keystone/latest/install/keystone-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1707886/+subscriptions



[Yahoo-eng-team] [Bug 1716792] Re: Install and configure in keystone, Pike: nav button wrong

2018-01-27 Thread Colleen Murphy
** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1716792

Title:
  Install and configure in keystone, Pike: nav button wrong

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) pike series:
  Fix Committed

Bug description:
  
  This bug tracker is for errors with the documentation, use the following as a 
template and remove or add fields as you see fit. Convert [ ] into [x] to check 
boxes:

  On page https://docs.openstack.org/keystone/pike/install/index-
  rdo.html,

  - [x] This doc is inaccurate in this way: "Forward" button goes to Verify 
section, but should go to "Create a domain, projects, users, and roles" 
(https://docs.openstack.org/keystone/pike/install/keystone-users.html)
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 12.0.0.0rc3.dev2 on 2017-08-26 22:01
  SHA: 5a9aeefff06678d790d167b6dac752677f02edf9
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-install-rdo.rst
  URL: 
https://docs.openstack.org/keystone/pike/install/keystone-install-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1716792/+subscriptions



[Yahoo-eng-team] [Bug 1716899] Re: Install and configure in keystone

2018-01-27 Thread Colleen Murphy
** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1716899

Title:
  Install and configure in keystone

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The Next link at the bottom of the page at
  "https://docs.openstack.org/keystone/pike/install/keystone-install-
  ubuntu.html#finalize-the-installation" points to "Verify operation" and
  not to "Create a domain, projects, users, and roles".



  
  This bug tracker is for errors with the documentation, use the following as a 
template and remove or add fields as you see fit. Convert [ ] into [x] to check 
boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 12.0.0.0rc3.dev2 on 2017-08-26 22:01
  SHA: 5a9aeefff06678d790d167b6dac752677f02edf9
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-install-ubuntu.rst
  URL: 
https://docs.openstack.org/keystone/pike/install/keystone-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1716899/+subscriptions



[Yahoo-eng-team] [Bug 1721402] Re: stable/ocata requirements mismatch (pika and iso8601)

2018-01-27 Thread Colleen Murphy
Setting to won't fix for keystone as well. You can install with -c
requirements/upper-constraints.txt to ensure dependencies are
constrained to known working versions.

** Changed in: keystone
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1721402

Title:
  stable/ocata requirements mismatch (pika and iso8601)

Status in OpenStack Identity (keystone):
  Won't Fix
Status in oslo.utils:
  Won't Fix

Bug description:
  When installing keystone from GitHub 
(https://github.com/openstack/keystone/tree/stable/ocata),
  there are 2 packages that cause issues with proper functionality.

  The first is pika.  When starting the service, it reports that pika must
  be >0.9.0 but <0.11.0; however, the requirements.txt file allows
  0.11.0 to be installed.

  The second is iso8601.  The service will stand up just fine, but when 
attempting to log in, it will fail to authenticate because the oslo_utils 
time parser is unable to parse a time in the following format:
  2010-01-01T12:00:00UTC+01:00

  Further investigation shows that version 0.1.12 introduced this regression
  (https://bitbucket.org/micktwomey/pyiso8601/).  Downgrading iso8601 to
  0.1.11 resolves the issue.
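  As a minimal illustration (not keystone's or oslo.utils' actual code,
  and the helper name here is hypothetical), the problematic timestamp
  becomes parseable by the standard library once the non-standard "UTC"
  token before the offset is stripped:

```python
from datetime import datetime

def parse_keystone_expiry(value: str) -> datetime:
    """Parse timestamps like '2010-01-01T12:00:00UTC+01:00'.

    The literal 'UTC' before the numeric offset is what trips the
    parser; dropping it leaves a plain ISO 8601 string that the
    stdlib handles directly.
    """
    return datetime.fromisoformat(value.replace("UTC", ""))

print(parse_keystone_expiry("2010-01-01T12:00:00UTC+01:00").utcoffset())  # 1:00:00
```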

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1721402/+subscriptions



[Yahoo-eng-team] [Bug 1675822] Re: Allow policy actions in code to be importable for RBAC testing

2018-01-27 Thread Colleen Murphy
** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1675822

Title:
  Allow policy actions in code to be importable for RBAC testing

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Now that Keystone is defining all of its policy actions in code, it is
  no longer possible to read the keystone policy.json in order to
  retrieve an exhaustive list of all the Keystone policy actions,
  necessary for RBAC testing by Patrole.

  Currently, Nova has its policy actions in code [0] and allows them to
  be imported via setup.cfg [1].

  Keystone can do the same thing as Nova by adding

  oslo.policy.policies =
  keystone = keystone.common.policies:list_rules

  to its setup.cfg.

  Moreover, oslo.policy currently uses the "oslo.policy.policies"
  extension by default [2] in order to generate a sample policy file.

  This bug fix, therefore, solves both issues.

  [0] https://github.com/openstack/nova/blob/master/nova/policies/__init__.py
  [1] https://github.com/openstack/nova/blob/master/setup.cfg
  [2] 
https://github.com/openstack/oslo.policy/blob/master/oslo_policy/generator.py
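  To sketch how a consumer such as Patrole or a sample-policy generator
  could pick up the rules once the entry point exists, here is a
  stdlib-only illustration; only the "oslo.policy.policies" group name
  comes from this report, and the `discover_policy_rules` helper is
  hypothetical:

```python
from importlib.metadata import entry_points

def discover_policy_rules(namespace: str = "oslo.policy.policies"):
    """Return {project_name: list_rules_callable} for every installed
    project that registered its in-code policies under the group."""
    eps = entry_points()
    # Python 3.10+ exposes select(); older versions return a dict.
    group = eps.select(group=namespace) if hasattr(eps, "select") else eps.get(namespace, [])
    return {ep.name: ep.load() for ep in group}

# With keystone's setup.cfg change applied, this would include a
# "keystone" key mapped to keystone.common.policies.list_rules; in a
# bare environment the mapping is simply empty.
print(sorted(discover_policy_rules()))
```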

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1675822/+subscriptions



[Yahoo-eng-team] [Bug 1742421] Re: Cells Layout (v2) in nova doc misleading about upcalls

2018-01-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/532491
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=30786b892e45602d76cea63ca4ae1172fb2e063e
Submitter: Zuul
Branch:master

commit 30786b892e45602d76cea63ca4ae1172fb2e063e
Author: Liam Young 
Date:   Wed Jan 10 11:06:37 2018 +

Add exception to no-upcall note of cells doc

The cells v2 layout documentation clearly states that there are no
upcalls from cells back to the central API services. This misled
me for some time, as I could not fathom how a compute node in a cell
was supposed to report its resource info.

It turns out nova looks up the placement service in the keystone
catalogue and contacts it directly, which to my mind is an upcall. I
wonder if the author of the note felt that the placement service is
not really part of nova?

Change-Id: If14be8b182f0af4e4e6641046fec638c07e26546
Closes-Bug: #1742421


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1742421

Title:
  Cells Layout (v2) in nova doc misleading about upcalls

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  
  - [X] This doc is inaccurate in this way: Documentation suggests nova v2 
cells do not make 'upcalls' but they do when talking to the placement api.
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  
  It is important to note that services in the lower cell boxes
  only have the ability to call back to the placement API and no other
  API-layer services via RPC, nor do they have access to the API database
  for global visibility of resources across the cloud. This is intentional
  and provides security and failure domain isolation benefits, but also has 
  impacts on some things that would otherwise require this any-to-any 
  communication style. Check the release notes for the version of Nova you 
  are using for the most up-to-date information about any caveats that may be
  present due to this limitation.

  
  ---
  Release: 17.0.0.0b3.dev323 on 2018-01-09 21:52
  SHA: 90a92d33edaea2b7411a5fd528f3159a486e1fd0
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/user/cellsv2-layout.rst
  URL: https://docs.openstack.org/nova/latest/user/cellsv2-layout.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1742421/+subscriptions



[Yahoo-eng-team] [Bug 1734427] Re: 'all_tenants' 'all_projects' query param is not only integer as mentioned in api-ref

2018-01-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/522918
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=6a75cb2ef9b423b03c0be7ddc74dc1a34cd5a4e3
Submitter: Zuul
Branch:master

commit 6a75cb2ef9b423b03c0be7ddc74dc1a34cd5a4e3
Author: ghanshyam 
Date:   Sat Nov 25 11:34:24 2017 +0300

Fix 'all_tenants' & 'all_projects' type in api-ref

'all_tenants' and 'all_projects' are query params used to
list the resources for all tenants/projects.

Checking of this query param in code differs between APIs.
- The GET /servers and /servers/detail APIs check the value of 'all_tenants'
  strictly as a boolean if one is present.
- Other APIs just check for its presence in the request,
  like GET /os-server-groups and /os-fping.

The api-ref listed this param's type variously as integer, boolean, or string.

This commit makes the api-ref consistent by documenting this query
param's type as string.

Change-Id: I5297e6baa1e3d06adfc9d29d2bc56124119b9c8c
Closes-Bug: #1734427


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1734427

Title:
  'all_tenants'  'all_projects'  query param is not only integer as
  mentioned in api-ref

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  'all_tenants' and 'all_projects' are query params used to list the
  resources for all tenants/projects.

  Checking of this query param in code differs between APIs.
  -  The GET /servers API checks the value of 'all_tenants' as a bool [1]. 
  -  Other APIs just check for its presence in the request, like GET 
/os-server-groups and /os-fping [2]

  The api-ref lists this param's type variously as integer, boolean, or
  string.

  It would be good to document this query param's type consistently to
  avoid confusion for users.

  [1]
  
https://github.com/openstack/nova/blob/e9104dbaef9bbccc6b19811125d439fdf9558428/nova/api/openstack/compute/servers.py#L265

  [2]
  
https://github.com/openstack/nova/blob/e9104dbaef9bbccc6b19811125d439fdf9558428/nova/api/openstack/compute/server_groups.py#L138

  
https://github.com/openstack/nova/blob/e9104dbaef9bbccc6b19811125d439fdf9558428/nova/api/openstack/compute/fping.py#L75
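  The two checking styles described above can be sketched as follows;
  this is an illustrative model only, not nova's actual code, and the
  helper names are made up:

```python
from urllib.parse import parse_qs, urlsplit

TRUE_STRINGS = ("1", "t", "true", "on", "y", "yes")
FALSE_STRINGS = ("0", "f", "false", "off", "n", "no")

def strict_bool(params: dict, name: str = "all_tenants") -> bool:
    """Style 1 (GET /servers): if the param is present, its value must
    parse as a boolean; anything else is a client error."""
    if name not in params:
        return False
    value = params[name][-1].lower()
    if value in TRUE_STRINGS:
        return True
    if value in FALSE_STRINGS:
        return False
    raise ValueError("%s must be a boolean, got %r" % (name, value))

def presence_only(params: dict, name: str = "all_tenants") -> bool:
    """Style 2 (GET /os-server-groups, /os-fping): the mere presence of
    the key enables the behaviour, whatever its value."""
    return name in params

query = parse_qs(urlsplit("/servers?all_tenants=0").query, keep_blank_values=True)
print(strict_bool(query), presence_only(query))  # False True
```

  The divergence is visible with `all_tenants=0`: the strict style
  treats it as "off", while the presence-only style treats it as "on".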

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1734427/+subscriptions



[Yahoo-eng-team] [Bug 1742401] Re: Fullstack tests neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork fails often

2018-01-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/536367
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=725df3e0382e048391fac109ea57920683eaf4d0
Submitter: Zuul
Branch:master

commit 725df3e0382e048391fac109ea57920683eaf4d0
Author: Sławek Kapłoński 
Date:   Mon Jan 22 14:01:30 2018 +0100

Fix race condition with enabling SG on many ports at once

When there are many calls to enable security groups on ports, there
can sometimes be a race condition between refreshing the resource_cache
with data fetched by a "pull" call to the neutron server and data
received in a "push" RPC message from the neutron server.
In such a case, when the "push" message arrives with information about
an updated port (with port_security enabled), the port is already
updated in the local cache, so the local AFTER_UPDATE call is not made
for the port and its rules in the firewall are not updated.
It happened quite often in the fullstack security groups test because
there are 4 ports created in this test and all 4 are updated one by
one to apply the SG to them.
Here's what happens then, in detail:
1. port 1 was updated in neutron-server so it sends push notification
   to L2 agent to update security groups,
2. port 1 info was saved in resource cache on L2 agent's side and agent
   started to configure security groups for this port,
3. as one of steps L2 agent called
   SecurityGroupServerAPIShim._select_ips_for_remote_group() method;
   In that method RemoteResourceCache.get_resources() is called and this
   method asks neutron-server for details about ports from given
   security_group,
4. in the meantime neutron-server got a port update call for the second port
   (with the same security group), so it sends the L2 agent information about 2
   ports (as a reply to the request sent from the L2 agent in step 3),
5. the resource cache updates the information about the two ports in the local
   cache, returns its data to
   SecurityGroupServerAPIShim._select_ips_for_remote_group(), and all
   looks fine,
6. but now L2 agent receives push notification with info that port 2 is
   updated (changed security groups), so it checks info about this port
   in local cache,
7. in the local cache, the info about port 2 already HAS the updated security
   group, so RemoteResourceCache doesn't trigger the local AFTER_UPDATE
   notification for the port and the L2 agent doesn't know that security
   groups for this port should be changed

This patch fixes it by changing the way items are updated in
the resource_cache.
It is now done with the record_resource_update() method instead of
writing new values directly to the resource_cache._type_cache dict.
Thanks to that, if a resource is updated during a "pull" call to the
neutron server, the local AFTER_UPDATE will still be triggered for it.

Change-Id: I5a62cc5731c5ba571506a3aa26303a1b0290d37b
Closes-Bug: #1742401
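A toy model of the fix (illustrative only; of the names below, only
record_resource_update comes from the commit) shows why routing both
pull replies and push events through one revision-aware update path
keeps the AFTER_UPDATE notification from being lost:

```python
class ResourceCache:
    """Minimal sketch of revision-aware cache updates."""

    def __init__(self):
        self._type_cache = {}   # port_id -> (revision, data)
        self.updates = []       # AFTER_UPDATE notifications fired

    def record_resource_update(self, port_id, revision, data):
        """Single entry point for both 'pull' replies and 'push' events.

        The AFTER_UPDATE notification fires exactly once per new
        revision, no matter which channel delivered the data first.
        """
        cached = self._type_cache.get(port_id)
        if cached is not None and cached[0] >= revision:
            return  # stale or duplicate: this revision was already seen
        self._type_cache[port_id] = (revision, data)
        self.updates.append((port_id, revision))

cache = ResourceCache()
# The pull reply about port 2 arrives first (step 5 above)...
cache.record_resource_update("port-2", 7, {"security_groups": ["sg-1"]})
# ...then the push notification for the same change (step 6): it is
# ignored, but the AFTER_UPDATE from the pull path already fired.
cache.record_resource_update("port-2", 7, {"security_groups": ["sg-1"]})
print(cache.updates)  # [('port-2', 7)]
```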


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1742401

Title:
  Fullstack tests
  neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork
  fails often

Status in neutron:
  Fix Released

Bug description:
  Fullstack tests from group
  neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork
  are often failing in gate with error like:

  ft1.1: 
neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork.test_securitygroup(ovs-hybrid)_StringException:
 Traceback (most recent call last):
File "neutron/tests/base.py", line 132, in func
  return f(self, *args, **kwargs)
File "neutron/tests/fullstack/test_securitygroup.py", line 193, in 
test_securitygroup
  net_helpers.assert_no_ping(vms[0].namespace, vms[1].ip)
File "neutron/tests/common/net_helpers.py", line 155, in assert_no_ping
  {'ns': src_namespace, 'destination': dst_ip})
File "neutron/tests/tools.py", line 144, in fail
  raise unittest2.TestCase.failureException(msg)
  AssertionError: destination ip 20.0.0.9 is replying to ping from namespace 
test-dbbb4045-363f-44cb-825b-17090f28df11, but it shouldn't

  Example gate logs: http://logs.openstack.org/43/529143/3/check
  /neutron-fullstack/d031a6b/logs/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1742401/+subscriptions



[Yahoo-eng-team] [Bug 1744447] Re: Filtering Port OVO based on security groups don't work

2018-01-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/536342
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=e7c0ec17df83da3c7ba3f453ad8935320c471eb8
Submitter: Zuul
Branch:master

commit e7c0ec17df83da3c7ba3f453ad8935320c471eb8
Author: Sławek Kapłoński 
Date:   Mon Jan 22 10:38:59 2018 +0100

Fix Port OVO filtering based on security groups

Filtering of the Port OVO based on the IDs of security groups
used by ports is now available.

Closes-Bug: #1744447

Change-Id: Ie5a3effe668db119d40728be5357f0851bdcebbe


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1744447

Title:
  Filtering Port OVO based on security groups don't work

Status in neutron:
  Fix Released

Bug description:
  Filtering Port OVO objects based on the security groups that ports use 
doesn't work properly. There was a patch, https://review.openstack.org/#/c/475283/, 
which should have provided this feature, but it appears not to work.
  The result of such filtering can be seen in the unit test results for patch 
https://review.openstack.org/#/c/535988/1 - the related UT is failing now: 
http://logs.openstack.org/88/535988/1/check/openstack-tox-py27/6606f96/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1744447/+subscriptions
