[Yahoo-eng-team] [Bug 1697243] Re: ovs bridge flow table is dropped by unknown cause

2019-01-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/587244
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=379a9faf6206039903555ce7e3fc4221e5f06a7a
Submitter: Zuul
Branch: master

commit 379a9faf6206039903555ce7e3fc4221e5f06a7a
Author: Arjun Baindur 
Date:   Mon Jul 30 15:31:50 2018 -0700

Change duplicate OVS bridge datapath-ids

The native OVS/ofctl controllers talk to the bridges using a
datapath-id, instead of the bridge name. The datapath ID is
auto-generated based on the MAC address of the bridge's NIC.
In the case where bridges are on VLAN interfaces, they would
have the same MAC, and therefore the same datapath-id, causing
flows meant for one physical bridge to be programmed onto another.

The datapath-id is a 64-bit field, with lower 48 bits being
the MAC. We set the upper 12 unused bits to identify each
unique physical bridge.

This could also be fixed manually using ovs-vsctl set, but
it might be beneficial to automate this in the code.

ovs-vsctl set bridge <bridge> other-config:datapath-id=<datapath-id>

You can change this yourself using the above command.

You can view/verify current datapath-id via

ovs-vsctl get Bridge br-vlan datapath-id
"6ea5a4b38a4a"

(please note that other-config is needed in the set, but not get)

Closes-Bug: #1697243
Co-Authored-By: Rodolfo Alonso Hernandez 

Change-Id: I575ddf0a66e2cfe745af3874728809cf54e37745
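
The idea described above can be illustrated with a small sketch. This is not
neutron's actual implementation; the helper below is hypothetical and only
shows how a per-bridge index in the otherwise-unused upper bits yields
distinct datapath-ids for bridges that share a MAC.

# Sketch only: keep the NIC MAC in the lower 48 bits of the 64-bit
# datapath-id and encode a per-bridge index in the upper bits.
def unique_datapath_id(mac, bridge_index):
    """Return a 16-hex-digit datapath-id string suitable for ovs-vsctl.

    mac: MAC address string such as '24:8a:07:55:41:e8'
    bridge_index: small integer distinguishing bridges that share a MAC
    """
    mac_bits = int(mac.replace(':', ''), 16)      # lower 48 bits
    dpid = (bridge_index << 48) | mac_bits        # upper bits = index
    return format(dpid, '016x')

# Two VLAN bridges on the same bond share a MAC but now get distinct
# datapath-ids, which can then be applied with:
#   ovs-vsctl set bridge <bridge> other-config:datapath-id=<value>
print(unique_datapath_id('24:8a:07:55:41:e8', 1))  # 0001248a075541e8
print(unique_datapath_id('24:8a:07:55:41:e8', 2))  # 0002248a075541e8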


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1697243

Title:
  ovs bridge flow table is dropped by unknown cause

Status in neutron:
  Fix Released

Bug description:
  Hi,

  My OpenStack deployment has a provider network whose OVS bridge is
  "provision". It had been running fine, but after several hours the
  network broke down and I found that the bridge's flow table was empty.

  Is there a way to trace changes to a bridge's flow table?
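
  One rough way to watch for this (a polling sketch, not an official OVS or
  neutron tool; it assumes ovs-ofctl is on PATH and the bridge is named
  "provision"):

  # Sketch: poll the flow table and log whenever it changes, to help
  # pinpoint when the flows disappear.
  import subprocess
  import time

  def dump_flows(bridge):
      out = subprocess.check_output(['ovs-ofctl', 'dump-flows', bridge])
      return out.decode()

  previous = None
  while True:
      current = dump_flows('provision')
      if previous is not None and current != previous:
          print(time.strftime('%Y-%m-%d %H:%M:%S'), 'flow table changed:')
          print(current)
      previous = current
      time.sleep(5)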

  [root@cloud-sz-master-b12-01 neutron]# ovs-ofctl dump-flows  provision 
  NXST_FLOW reply (xid=0x4):

  
  [root@cloud-sz-master-b12-02 nova]# ovs-ofctl dump-flows provision 
  NXST_FLOW reply (xid=0x4):
  [root@cloud-sz-master-b12-02 nova]# 
  [root@cloud-sz-master-b12-02 nova]# 
  [root@cloud-sz-master-b12-02 nova]# ip r
  ...
  10.53.33.0/24 dev provision  proto kernel  scope link  src 10.53.33.11 
  10.53.128.0/24 dev docker0  proto kernel  scope link  src 10.53.128.1 
  169.254.0.0/16 dev br-ex  scope link  metric 1055 
  169.254.0.0/16 dev provision  scope link  metric 1056 
  ...

  [root@cloud-sz-master-b12-02 nova]# ovs-ofctl show provision 
  OFPT_FEATURES_REPLY (xid=0x2): dpid:248a075541e8
  n_tables:254, n_buffers:256
  capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
  actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src 
mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
   1(bond0): addr:24:8a:07:55:41:e8
   config: 0
   state:  0
   speed: 0 Mbps now, 0 Mbps max
   2(phy-provision): addr:76:b5:88:cc:a6:74
   config: 0
   state:  0
   speed: 0 Mbps now, 0 Mbps max
   LOCAL(provision): addr:24:8a:07:55:41:e8
   config: 0
   state:  0
   speed: 0 Mbps now, 0 Mbps max
  OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

  [root@cloud-sz-master-b12-02 nova]# ifconfig bond0
  bond0: flags=5187  mtu 1500
  inet6 fe80::268a:7ff:fe55:41e8  prefixlen 64  scopeid 0x20
  ether 24:8a:07:55:41:e8  txqueuelen 1000  (Ethernet)
  RX packets 93588032  bytes 39646246456 (36.9 GiB)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 8655257217  bytes 27148795388 (25.2 GiB)
  TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

  [root@cloud-sz-master-b12-02 nova]# cat /proc/net/bonding/bond0 
  Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

  Bonding Mode: IEEE 802.3ad Dynamic link aggregation
  Transmit Hash Policy: layer2 (0)
  MII Status: up
  MII Polling Interval (ms): 0
  Up Delay (ms): 0
  Down Delay (ms): 0

  802.3ad info
  LACP rate: slow
  Min links: 0
  Aggregator selection policy (ad_select): stable
  System priority: 65535
  System MAC address: 24:8a:07:55:41:e8
  Active Aggregator Info:
  Aggregator ID: 19
  Number of ports: 2
  Actor Key: 13
  Partner Key: 11073
  Partner Mac Address: 38:bc:01:c2:26:a1

  Slave Interface: enp4s0f0
  MII Status: up
  Speed: 1 Mbps
  Duplex: full
  Link Failure Count: 0
  Permanent HW addr: 24:8a:07:55:41:e8
  Slave queue ID: 0
  Aggregator ID: 19
  Actor Churn State: none
  Partner Churn State: none
  Actor Churned Count: 0
  Partner Churned Count: 0
  details actor lacp pdu:
  system priority: 65535
  system mac address: 24:8a:07:55:41:e8
  port key: 13
 

[Yahoo-eng-team] [Bug 1813383] [NEW] opennebula: fail to sbuild, bash environment var failure EPOCHREALTIME

2019-01-25 Thread Chad Smith
Public bug reported:

Unit tests are failing while packaging cloud-init on Disco with sbuild,
due to failures in the OpenNebula datasource unit tests.


The unit tests now see EPOCHREALTIME values in the parsed metadata because
that environment variable changes across the unit test run.


The OpenNebula datasource tries to exclude bash -e environment variables
that are known to change, and EPOCHREALTIME is one of those variables: its
value is expected to keep changing between invocations, so it should be
excluded as well.
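
A hedged sketch of the kind of exclusion needed (the names below are
illustrative, not cloud-init's actual identifiers): when diffing the bash
environment before and after sourcing context.sh, ignore variables whose
values are expected to change on every invocation.

# Sketch only: filter out volatile bash variables when extracting the
# context variables set by context.sh.
EXCLUDED_VARS = {'RANDOM', 'LINENO', 'SECONDS', '_', 'EPOCHREALTIME',
                 'EPOCHSECONDS'}

def extract_context_vars(env_before, env_after):
    """Return only variables introduced or changed by context.sh itself."""
    return {
        key: value
        for key, value in env_after.items()
        if key not in EXCLUDED_VARS and env_before.get(key) != value
    }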


==
FAIL: 
tests.unittests.test_datasource.test_opennebula.TestOpenNebulaDataSource.test_context_parser
--
Traceback (most recent call last):
  File "/<>/tests/unittests/test_datasource/test_opennebula.py", 
line 161, in test_context_parser
self.assertEqual(TEST_VARS, results['metadata'])
AssertionError: {'VAR1': 'single', 'VAR2': 'double word', '[207 chars] '$'} != 
{'EPOCHREALTIME': '1548476675.477863', 'VAR[245 chars]e\n'}
+ {'EPOCHREALTIME': '1548476675.477863',
- {'VAR1': 'single',
? ^

+  'VAR1': 'single',
? ^

   'VAR10': '\\',
   'VAR11': "'",
   'VAR12': '$',
   'VAR2': 'double word',
   'VAR3': 'multi\nline\n',
   'VAR4': "'single'",
   'VAR5': "'double word'",
   'VAR6': "'multi\nline\n'",
   'VAR7': 'single\\t',
   'VAR8': 'double\\tword',
   'VAR9': 'multi\\t\nline\n'}
 >> begin captured logging << 
cloudinit.util: DEBUG: Reading from 
/tmp/ci-TestOpenNebulaDataSource.ms6gmudd/seed/opennebula/context.sh 
(quiet=False)
cloudinit.util: DEBUG: Read 262 bytes from 
/tmp/ci-TestOpenNebulaDataSource.ms6gmudd/seed/opennebula/context.sh
cloudinit.util: DEBUG: Running command ['bash', '-e'] with allowed return codes 
[0] (shell=False, capture=True)
- >> end captured logging << -

==
FAIL: 
tests.unittests.test_datasource.test_opennebula.TestOpenNebulaDataSource.test_seed_dir_empty1_context
--
Traceback (most recent call last):
  File "/<>/tests/unittests/test_datasource/test_opennebula.py", 
line 140, in test_seed_dir_empty1_context
self.assertEqual(results['metadata'], {})
AssertionError: {'EPOCHREALTIME': '1548476675.848343'} != {}
- {'EPOCHREALTIME': '1548476675.848343'}
+ {}
 >> begin captured logging << 
cloudinit.util: DEBUG: Reading from 
/tmp/ci-TestOpenNebulaDataSource.gu1w3vu_/seed/opennebula/context.sh 
(quiet=False)
cloudinit.util: DEBUG: Read 0 bytes from 
/tmp/ci-TestOpenNebulaDataSource.gu1w3vu_/seed/opennebula/context.sh
cloudinit.util: DEBUG: Running command ['bash', '-e'] with allowed return codes 
[0] (shell=False, capture=True)
- >> end captured logging << -

==
FAIL: 
tests.unittests.test_datasource.test_opennebula.TestOpenNebulaDataSource.test_seed_dir_empty2_context
--
Traceback (most recent call last):
  File "/<>/tests/unittests/test_datasource/test_opennebula.py", 
line 147, in test_seed_dir_empty2_context
self.assertEqual(results['metadata'], {})
AssertionError: {'EPOCHREALTIME': '1548476675.863058'} != {}
- {'EPOCHREALTIME': '1548476675.863058'}
+ {}
 >> begin captured logging << 
cloudinit.util: DEBUG: Reading from 
/tmp/ci-TestOpenNebulaDataSource.b3f_3ztm/seed/opennebula/context.sh 
(quiet=False)
cloudinit.util: DEBUG: Read 44 bytes from 
/tmp/ci-TestOpenNebulaDataSource.b3f_3ztm/seed/opennebula/context.sh
cloudinit.util: DEBUG: Running command ['bash', '-e'] with allowed return codes 
[0] (shell=False, capture=True)
- >> end captured logging << -

==
FAIL: test_no_seconds 
(tests.unittests.test_datasource.test_opennebula.TestParseShellConfig)
--
Traceback (most recent call last):
  File "/<>/tests/unittests/test_datasource/test_opennebula.py", 
line 921, in test_no_seconds
self.assertEqual(ret, {"foo": "bar", "xx": "foo"})
AssertionError: {'foo': 'bar', 'xx': 'foo', 'EPOCHREALTIME': 
'1548476676.329965'} != {'foo': 'bar', 'xx': 'foo'}
- {'EPOCHREALTIME': '1548476676.329965', 'foo': 'bar', 'xx': 'foo'}
+ {'foo': 'bar', 'xx': 'foo'}
 >> begin captured logging << 
cloudinit.util: DEBUG: Running command ['bash', '-e'] with allowed return codes 
[0] (shell=False, capture=True)
- >> end captured logging << -

--
Ran 1897 tests in 20.582s

FAILED (SKIP=10, failures=4)
make[2]: *** 

[Yahoo-eng-team] [Bug 1801779] Re: Policy rule rule:create_port:fixed_ips:subnet_id doesn't allow non-admin to create port on specific subnet

2019-01-25 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1801779

Title:
  Policy rule rule:create_port:fixed_ips:subnet_id doesn't allow non-
  admin to create port on specific subnet

Status in neutron:
  Expired

Bug description:
  Running roughly master branch. According to pip,
  neutron==13.0.0.0rc2.dev324. I know that isn't super helpful from a
  dev perspective, but this is a kolla image and I don't have a great
  way to map this back to a SHA.

  Trying to create a port on a specific subnet on a shared network. I
  have the following policy rules, which seem to imply I should be able
  to do this:

  "create_port:fixed_ips": "rule:context_is_advsvc or 
rule:admin_or_network_owner",
  "create_port:fixed_ips:ip_address": "rule:context_is_advsvc or 
rule:admin_or_network_owner",
  "create_port:fixed_ips:subnet_id": "rule:context_is_advsvc or 
rule:admin_or_network_owner or rule:shared",

  Client logs here:
  https://gist.github.com/jimrollenhagen/82514bee47ad66e1e878c56d8fd66453

  Not much showing up in neutron-server.log, but can provide more info
  if needed.
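
  For reference, the operation being attempted is roughly the following
  (a sketch using openstacksdk; the cloud name, network ID and subnet ID are
  placeholders, and connection details are omitted):

  import openstack

  # Non-admin user creating a port on a specific subnet of a shared network,
  # which create_port:fixed_ips:subnet_id should permit via rule:shared.
  conn = openstack.connect(cloud='mycloud')
  port = conn.network.create_port(
      network_id='SHARED_NETWORK_ID',
      fixed_ips=[{'subnet_id': 'SUBNET_ID'}],
  )
  print(port.id)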

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1801779/+subscriptions



[Yahoo-eng-team] [Bug 1801738] Re: Assigning a FIP to an NLBaaS VIP port doesn't affect Designate

2019-01-25 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1801738

Title:
  Assigning a FIP to an NLBaaS VIP port doesn't affect Designate

Status in neutron:
  Expired

Bug description:
  When assigning a floating IP address to an NLBaaS VIP port, the
Neutron-Designate integration is expected to register the port in DNS.
  However, the port is not registered by Designate because Neutron does not
configure the FIP in DNS as expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1801738/+subscriptions



[Yahoo-eng-team] [Bug 1813361] [NEW] disco: python37 unittest/tox support

2019-01-25 Thread Chad Smith
Public bug reported:

cloud-init's xenial toxenv falls over on tip of master
7a4696596bbcccfedf5c6b6e25ad684ef30d9cea
in Ubuntu Disco python37 environments. Some of the tox dependencies, like
httpretty, are exhibiting issues with Python 3.7.

Make "all the tox things" work on Disco.

Type of errors seen when running tox -r -e xenial on Disco:

==
ERROR: 
tests.unittests.test_datasource.test_openstack.TestOpenStackDataSource.test_wb__crawl_metadata_does_not_persist
--
Traceback (most recent call last):
  File 
"/home/ubuntu/cloud-init/.tox/xenial/lib/python3.7/site-packages/httpretty/core.py",
 line 1055, in wrapper
return test(*args, **kw)
  File 
"/home/ubuntu/cloud-init/tests/unittests/test_datasource/test_openstack.py", 
line 395, in test_wb__crawl_metadata_does_not_persist
_register_uris(self.VERSION, EC2_FILES, EC2_META, OS_FILES)
  File 
"/home/ubuntu/cloud-init/tests/unittests/test_datasource/test_openstack.py", 
line 126, in _register_uris
body=get_request_callback)
  File 
"/home/ubuntu/cloud-init/.tox/xenial/lib/python3.7/site-packages/httpretty/core.py",
 line 938, in register_uri
match_querystring)
  File 
"/home/ubuntu/cloud-init/.tox/xenial/lib/python3.7/site-packages/httpretty/core.py",
 line 760, in __init__
self.info = URIInfo.from_uri(uri, entries)
  File 
"/home/ubuntu/cloud-init/.tox/xenial/lib/python3.7/site-packages/httpretty/core.py",
 line 730, in from_uri
result = urlsplit(uri)
  File "/usr/lib/python3.7/urllib/parse.py", line 400, in urlsplit
url, scheme, _coerce_result = _coerce_args(url, scheme)
  File "/usr/lib/python3.7/urllib/parse.py", line 123, in _coerce_args
return _decode_args(args) + (_encode_result,)
  File "/usr/lib/python3.7/urllib/parse.py", line 107, in _decode_args
return tuple(x.decode(encoding, errors) if x else '' for x in args)
  File "/usr/lib/python3.7/urllib/parse.py", line 107, in <genexpr>
return tuple(x.decode(encoding, errors) if x else '' for x in args)
AttributeError: 're.Pattern' object has no attribute 'decode'

--
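
The traceback suggests httpretty is handing a compiled regex to urllib's
urlsplit(), which Python 3.7 no longer tolerates. A minimal standalone
reproduction of that failure mode (not cloud-init or httpretty code, just an
illustration) is:

import re
from urllib.parse import urlsplit

# urlsplit() treats a non-str argument as bytes and tries to decode it,
# which fails for a compiled re.Pattern.
pattern = re.compile(r'http://169\.254\.169\.254/.*')
try:
    urlsplit(pattern)
except AttributeError as exc:
    print(exc)   # 're.Pattern' object has no attribute 'decode'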

Also of note: python27 interpreter isn't available on Disco, so we need
to allow tox to skip py27 env if not present.

** Affects: cloud-init
 Importance: Undecided
 Assignee: Chad Smith (chad.smith)
 Status: In Progress

** Changed in: cloud-init
   Status: New => In Progress

** Changed in: cloud-init
 Assignee: (unassigned) => Chad Smith (chad.smith)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1813361

Title:
  disco: python37 unittest/tox support

Status in cloud-init:
  In Progress

Bug description:
  cloud-init's xenial toxenv falls over on tip of master
7a4696596bbcccfedf5c6b6e25ad684ef30d9cea
  in Ubuntu Disco python37 environments. Some of the tox dependencies, like
httpretty, are exhibiting issues with Python 3.7.

  Make "all the tox things" work on Disco.

  Type of errors seen when running tox -r -e xenial on Disco:

  ==
  ERROR: 
tests.unittests.test_datasource.test_openstack.TestOpenStackDataSource.test_wb__crawl_metadata_does_not_persist
  --
  Traceback (most recent call last):
    File 
"/home/ubuntu/cloud-init/.tox/xenial/lib/python3.7/site-packages/httpretty/core.py",
 line 1055, in wrapper
  return test(*args, **kw)
    File 
"/home/ubuntu/cloud-init/tests/unittests/test_datasource/test_openstack.py", 
line 395, in test_wb__crawl_metadata_does_not_persist
  _register_uris(self.VERSION, EC2_FILES, EC2_META, OS_FILES)
    File 
"/home/ubuntu/cloud-init/tests/unittests/test_datasource/test_openstack.py", 
line 126, in _register_uris
  body=get_request_callback)
    File 
"/home/ubuntu/cloud-init/.tox/xenial/lib/python3.7/site-packages/httpretty/core.py",
 line 938, in register_uri
  match_querystring)
    File 
"/home/ubuntu/cloud-init/.tox/xenial/lib/python3.7/site-packages/httpretty/core.py",
 line 760, in __init__
  self.info = URIInfo.from_uri(uri, entries)
    File 
"/home/ubuntu/cloud-init/.tox/xenial/lib/python3.7/site-packages/httpretty/core.py",
 line 730, in from_uri
  result = urlsplit(uri)
    File "/usr/lib/python3.7/urllib/parse.py", line 400, in urlsplit
  url, scheme, _coerce_result = _coerce_args(url, scheme)
    File "/usr/lib/python3.7/urllib/parse.py", line 123, in _coerce_args
  return _decode_args(args) + (_encode_result,)
    File "/usr/lib/python3.7/urllib/parse.py", line 107, in _decode_args
  return tuple(x.decode(encoding, errors) if x else '' for x in args)
    File "/usr/lib/python3.7/urllib/parse.py", line 107, in <genexpr>
  return tuple(x.decode(encoding, errors) if x else 

[Yahoo-eng-team] [Bug 1808951] Re: python3 + Fedora + SSL + wsgi nova deployment, nova api returns RecursionError: maximum recursion depth exceeded while calling a Python object

2019-01-25 Thread Alex Schultz
Adding tripleo because this is affecting our fedora28 containers when
deployed via an undercloud with ssl enabled.

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Incomplete

** Changed in: tripleo
   Status: Incomplete => Triaged

** Changed in: tripleo
   Importance: Undecided => High

** Changed in: tripleo
Milestone: None => stein-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1808951

Title:
  python3 + Fedora + SSL + wsgi nova deployment, nova api returns
  RecursionError: maximum recursion depth exceeded while calling a
  Python object

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  Triaged

Bug description:
  Description:

  While testing python3 with Fedora in [1], I found an issue when running
  nova-api behind WSGI. It fails with the traceback below:

  2018-12-18 07:41:55.364 26870 INFO nova.api.openstack.requestlog 
[req-e1af4808-ecd8-47c7-9568-a5dd9691c2c9 - - - - -] 127.0.0.1 "GET 
/v2.1/servers/detail?all_tenants=True=True" status: 500 len: 0 
microversion: - time: 0.007297
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack 
[req-e1af4808-ecd8-47c7-9568-a5dd9691c2c9 - - - - -] Caught error: maximum 
recursion depth exceeded while calling a Python object: RecursionError: maximum 
recursion depth exceeded while calling a Python object
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack Traceback (most recent 
call last):
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/nova/api/openstack/__init__.py", line 94, in 
__call__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return 
req.get_response(self.application)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1313, in send
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1277, in 
call_application
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **kw)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return 
self.func(req, *args, **kwargs)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/nova/api/openstack/requestlog.py", line 92, 
in __call__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack self._log_req(req, 
res, start)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack 
self.force_reraise()
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack 
six.reraise(self.type_, self.value, self.tb)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack raise value
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/nova/api/openstack/requestlog.py", line 87, 
in __call__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack res = 
req.get_response(self.application)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1313, in send
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack application, 
catch_exc_info=False)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1277, in 
call_application
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 143, in __call__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return 
resp(environ, start_response)
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__
  2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **kw)
  

[Yahoo-eng-team] [Bug 1813336] [NEW] Requesting a scoped token when using x509 authentication is redundant

2019-01-25 Thread Lance Bragstad
Public bug reported:

In order to get a project-scoped token with an x509 certificate (not
tokenless authentication), I need to specify X-Project-Id in the request
header and I need to specify the project in the scope of the request
body.

If I leave out the header (e.g., X-Project-Id) but keep the scope in the
request body, the request fails with an HTTP 400 validation error [1].
If I leave the request body unscoped and keep the X-Project-Id header in
the request, it is ignored and I get back an unscoped token [2].

It seems redundant to have to specify both to get a scoped token.

[0] https://pasted.tech/pastes/44d9393b0b01f40257fc216fec914ebb9bce07a6.raw
[1] https://pasted.tech/pastes/a41b17ec4c51bb57cb7625847544a75b97282585.raw
[2] https://pasted.tech/pastes/746cd35c00a6fd1c0d12a49ec1a705b4d0464b6a.raw
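
A sketch of the redundancy (not a documented keystone workflow; the URL, IDs,
and the identity section of the body are placeholders that depend on the x509
auth plugin and mapping in use):

import requests

KEYSTONE = 'https://keystone.example.com/identity/v3'   # placeholder
PROJECT_ID = 'PROJECT_ID'                                # placeholder

body = {
    'auth': {
        # identity section elided; it depends on the configured x509 plugin
        'identity': {'methods': []},
        'scope': {'project': {'id': PROJECT_ID}},        # project, take one
    }
}

resp = requests.post(
    KEYSTONE + '/auth/tokens',
    headers={'X-Project-Id': PROJECT_ID},                # project, take two
    json=body,
    cert=('client.crt', 'client.key'),                   # client certificate
)
print(resp.status_code, resp.headers.get('X-Subject-Token'))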

** Affects: keystone
 Importance: Medium
 Status: Triaged


** Tags: user-experience x509

** Tags added: x509

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => Medium

** Tags added: user-experience

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1813336

Title:
  Requesting a scoped token when using x509 authentication is redundant

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  In order to get a project-scoped token with an x509 certificate (not
  tokenless authentication), I need to specify X-Project-Id in the
  request header and I need to specify the project in the scope of the
  request body.

  If I leave out the header (e.g., X-Project-Id) but keep the scope in
  the request body, the request fails with an HTTP 400 validation error
  [1]. If I leave the request body unscoped and keep the X-Project-Id
  header in the request, it is ignored and I get back an unscoped token
  [2].

  It seems redundant to have to specify both to get a scoped token.

  [0] https://pasted.tech/pastes/44d9393b0b01f40257fc216fec914ebb9bce07a6.raw
  [1] https://pasted.tech/pastes/a41b17ec4c51bb57cb7625847544a75b97282585.raw
  [2] https://pasted.tech/pastes/746cd35c00a6fd1c0d12a49ec1a705b4d0464b6a.raw

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1813336/+subscriptions



[Yahoo-eng-team] [Bug 1813335] [NEW] x509 configured domains are redundant with auto-generated identity provider domains

2019-01-25 Thread Lance Bragstad
Public bug reported:

In order to set up x509 authentication, operators need to specify
trusted issuers in their keystone configuration [0] and they need to
specify a REMOTE_DOMAIN attribute through their chosen SSL library [1].
The REMOTE_DOMAIN is then passed into keystone via the request
environment and optionally used to namespace the user from REMOTE_USER.

Several releases ago, keystone merged support for auto-generating a
domain for each identity provider resource [2]. There is also support
for specifying a domain for an identity provider when creating it. The
purpose of this is very similar to the REMOTE_DOMAIN from SSL, in that
federated users coming from a specific identity provider have a domain
for their user to be namespaced to.

If keystone can use the domain from the configured x509 identity
provider, then we might not need to have operators specify REMOTE_DOMAIN
in their apache configuration. This also means that users presenting
certificates from different trusted_issuers can be mapped into different
domains, instead of all being lumped into the REMOTE_DOMAIN.

[0] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/conf/tokenless_auth.py?id=e647d6f69762523d0dfa28137a9f11010b550e72#n18
[1] 
https://docs.openstack.org/keystone/latest/admin/external-authentication.html#configuration
[2] https://review.openstack.org/#/c/399684/

** Affects: keystone
 Importance: Low
 Status: Triaged


** Tags: x509

** Tags added: x509

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => Medium

** Changed in: keystone
   Importance: Medium => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1813335

Title:
  x509 configured domains are redundant with auto-generated identity
  provider domains

Status in OpenStack Identity (keystone):
  Triaged

Bug description:
  In order to set up x509 authentication, operators need to specify
  trusted issuers in their keystone configuration [0] and they need to
  specify a REMOTE_DOMAIN attribute through their chosen SSL library
  [1]. The REMOTE_DOMAIN is then passed into keystone via the request
  environment and optionally used to namespace the user from
  REMOTE_USER.

  Several releases ago, keystone merged support for auto-generating a
  domain for each identity provider resource [2]. There is also support
  for specifying a domain for an identity provider when creating it. The
  purpose of this is very similar to the REMOTE_DOMAIN from SSL, in that
  federated users coming from a specific identity provider have a domain
  for their user to be namespaced to.

  If keystone can use the domain from the configured x509 identity
  provider, then we might not need to have operators specify
  REMOTE_DOMAIN in their apache configuration. This also means that
  users presenting certificates from different trusted_issuers can be
  mapped into different domains, instead of all being lumped into the
  REMOTE_DOMAIN.

  [0] 
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/conf/tokenless_auth.py?id=e647d6f69762523d0dfa28137a9f11010b550e72#n18
  [1] 
https://docs.openstack.org/keystone/latest/admin/external-authentication.html#configuration
  [2] https://review.openstack.org/#/c/399684/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1813335/+subscriptions



[Yahoo-eng-team] [Bug 1804522] Re: Service provider API doesn't use default roles

2019-01-25 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/620158
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=7ce5e3e24e8291c0af387a72ce7b47c3b28a9f74
Submitter: Zuul
Branch: master

commit 7ce5e3e24e8291c0af387a72ce7b47c3b28a9f74
Author: Lance Bragstad 
Date:   Mon Nov 26 20:43:09 2018 +

Update service provider policies for system admin

This change makes the policy definitions for admin service
provider operations consistent with the other service provider
policies. Subsequent patches will incorporate:

 - domain users test coverage
 - project users test coverage

Change-Id: I621192f089d1b29e2585d0030716348274e50bf1
Related-Bug: 1804520
Closes-Bug: 1804522


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1804522

Title:
  Service provider API doesn't use default roles

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  In Rocky, keystone implemented support to ensure at least three
  default roles were available [0]. The service provider (federation)
  API doesn't incorporate these defaults into its default policies [1],
  but it should.

  [0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html
  [1] 
https://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/service_provider.py?id=fb73912d87b61c419a86c0a9415ebdcf1e186927

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1804522/+subscriptions



[Yahoo-eng-team] [Bug 1813297] [NEW] Horizon cannot log in when Keystone's endpoint number is over 60

2019-01-25 Thread summer
Public bug reported:

Hello:
 My cluster had five regions a month ago. Recently I added a new region and
encountered a problem where Horizon cannot log in. I noticed that this happens
only when the number of endpoints is over 60.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1813297

Title:
  Horizon cannot log in when Keystone's endpoint number is over 60

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hello:
   My cluster had five regions a month ago. Recently I added a new region and
 encountered a problem where Horizon cannot log in. I noticed that this
happens only when the number of endpoints is over 60.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1813297/+subscriptions



[Yahoo-eng-team] [Bug 1813278] [NEW] Race during adding and updating same port in L3 agent's info can generate wrong radvd config file

2019-01-25 Thread Slawek Kaplonski
Public bug reported:

There is a possibility that, because of a race in processing the
adding/updating of internal port info in the RouterInfo class, the same port,
with two different revisions and different subnets configured, will be added
to RouterInfo.internal_ports twice in the RouterInfo._process_internal_ports
method
(https://github.com/openstack/neutron/blob/master/neutron/agent/l3/router_info.py#L544).
If the port has an IPv6 gateway configured and the radvd daemon should be
started for such a router, this may lead to generating a radvd config file
with duplicate interface stanzas, like:

interface qr-29c030a8-26
{
   AdvSendAdvert on;
   MinRtrAdvInterval 30;
   MaxRtrAdvInterval 100;
   AdvLinkMTU 1500;
   AdvOtherConfigFlag on;

   prefix 2003:0:0:1::/64
   {
AdvOnLink on;
AdvAutonomous on;
   };

   prefix 2003::/64
   {
AdvOnLink on;
AdvAutonomous on;
   };
};interface qr-29c030a8-26
{
   AdvSendAdvert on;
   MinRtrAdvInterval 30;
   MaxRtrAdvInterval 100;
   AdvLinkMTU 1500;

   AdvOtherConfigFlag on;

   prefix 2003::/64
   {
AdvOnLink on;
AdvAutonomous on;
   };
};


In some cases this may cause the radvd daemon to crash. See also
https://bugzilla.redhat.com/show_bug.cgi?id=1630167 for more details.
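
A minimal sketch of one possible guard (not the actual neutron fix; it assumes
the internal ports are dicts carrying 'id' and 'revision_number' keys):

# Sketch: de-duplicate internal ports by port id before generating the radvd
# config, keeping only the newest revision of each port.
def dedup_internal_ports(ports):
    latest = {}
    for port in ports:
        existing = latest.get(port['id'])
        if (existing is None or
                port.get('revision_number', 0) >
                existing.get('revision_number', 0)):
            latest[port['id']] = port
    return list(latest.values())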

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1813278

Title:
  Race during adding and updating same port in L3 agent's info can
  generate wrong radvd config file

Status in neutron:
  Confirmed

Bug description:
  There is a possibility that, because of a race in processing the
adding/updating of internal port info in the RouterInfo class, the same port,
with two different revisions and different subnets configured, will be added
to RouterInfo.internal_ports twice in the RouterInfo._process_internal_ports
method
(https://github.com/openstack/neutron/blob/master/neutron/agent/l3/router_info.py#L544).
  If the port has an IPv6 gateway configured and the radvd daemon should be
started for such a router, this may lead to generating a radvd config file
with duplicate interface stanzas, like:

  interface qr-29c030a8-26
  {
 AdvSendAdvert on;
 MinRtrAdvInterval 30;
 MaxRtrAdvInterval 100;
 AdvLinkMTU 1500;
 AdvOtherConfigFlag on;

 prefix 2003:0:0:1::/64
 {
  AdvOnLink on;
  AdvAutonomous on;
 };

 prefix 2003::/64
 {
  AdvOnLink on;
  AdvAutonomous on;
 };
  };interface qr-29c030a8-26
  {
 AdvSendAdvert on;
 MinRtrAdvInterval 30;
 MaxRtrAdvInterval 100;
 AdvLinkMTU 1500;

 AdvOtherConfigFlag on;

 prefix 2003::/64
 {
  AdvOnLink on;
  AdvAutonomous on;
 };
  };

  
  In some cases this may cause the radvd daemon to crash. See also
https://bugzilla.redhat.com/show_bug.cgi?id=1630167 for more details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1813278/+subscriptions



[Yahoo-eng-team] [Bug 1813279] [NEW] Race during adding and updating same port in L3 agent's info can generate wrong radvd config file

2019-01-25 Thread Slawek Kaplonski
Public bug reported:

There is a possibility that, because of a race in processing the
adding/updating of internal port info in the RouterInfo class, the same port,
with two different revisions and different subnets configured, will be added
to RouterInfo.internal_ports twice in the RouterInfo._process_internal_ports
method
(https://github.com/openstack/neutron/blob/master/neutron/agent/l3/router_info.py#L544).
If the port has an IPv6 gateway configured and the radvd daemon should be
started for such a router, this may lead to generating a radvd config file
with duplicate interface stanzas, like:

interface qr-29c030a8-26
{
   AdvSendAdvert on;
   MinRtrAdvInterval 30;
   MaxRtrAdvInterval 100;
   AdvLinkMTU 1500;
   AdvOtherConfigFlag on;

   prefix 2003:0:0:1::/64
   {
AdvOnLink on;
AdvAutonomous on;
   };

   prefix 2003::/64
   {
AdvOnLink on;
AdvAutonomous on;
   };
};interface qr-29c030a8-26
{
   AdvSendAdvert on;
   MinRtrAdvInterval 30;
   MaxRtrAdvInterval 100;
   AdvLinkMTU 1500;

   AdvOtherConfigFlag on;

   prefix 2003::/64
   {
AdvOnLink on;
AdvAutonomous on;
   };
};


In some cases this may cause the radvd daemon to crash. See also
https://bugzilla.redhat.com/show_bug.cgi?id=1630167 for more details.

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1813279

Title:
  Race during adding and updating same port in L3 agent's info can
  generate wrong radvd config file

Status in neutron:
  Confirmed

Bug description:
  There is a possibility that, because of a race in processing the
adding/updating of internal port info in the RouterInfo class, the same port,
with two different revisions and different subnets configured, will be added
to RouterInfo.internal_ports twice in the RouterInfo._process_internal_ports
method
(https://github.com/openstack/neutron/blob/master/neutron/agent/l3/router_info.py#L544).
  If the port has an IPv6 gateway configured and the radvd daemon should be
started for such a router, this may lead to generating a radvd config file
with duplicate interface stanzas, like:

  interface qr-29c030a8-26
  {
 AdvSendAdvert on;
 MinRtrAdvInterval 30;
 MaxRtrAdvInterval 100;
 AdvLinkMTU 1500;
 AdvOtherConfigFlag on;

 prefix 2003:0:0:1::/64
 {
  AdvOnLink on;
  AdvAutonomous on;
 };

 prefix 2003::/64
 {
  AdvOnLink on;
  AdvAutonomous on;
 };
  };interface qr-29c030a8-26
  {
 AdvSendAdvert on;
 MinRtrAdvInterval 30;
 MaxRtrAdvInterval 100;
 AdvLinkMTU 1500;

 AdvOtherConfigFlag on;

 prefix 2003::/64
 {
  AdvOnLink on;
  AdvAutonomous on;
 };
  };

  
  In some cases this may cause the radvd daemon to crash. See also
https://bugzilla.redhat.com/show_bug.cgi?id=1630167 for more details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1813279/+subscriptions



[Yahoo-eng-team] [Bug 1813265] [NEW] Documentation should use endpoints with path /identity instead of port 5000

2019-01-25 Thread Colleen Murphy
Public bug reported:

In devstack we configure keystone to run on port 80/443 proxied through
the /identity URL path. We semi-officially recommend doing the same in
production, but all of our documentation points to using port 5000 with
no path. We should update the documentation to use the recommended
endpoint configuration.

Note that keystone and horizon are commonly co-located and horizon by
default runs on port 80/443 with no URL path, so the documentation will
need to explain how to configure apache/nginx/haproxy such that horizon
and keystone don't collide.
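
An illustrative Apache sketch of the recommended layout (the paths and process
settings here are assumptions for documentation purposes, not an official
recommendation): keystone is served under /identity on the same vhost that
serves horizon at /, so the two do not collide.

<VirtualHost *:443>
    # ... SSL settings and the horizon WSGI application mounted at / ...

    # keystone mounted under /identity on the same host and port
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone
    WSGIScriptAlias /identity /usr/local/bin/keystone-wsgi-public process-group=keystone-public application-group=%{GLOBAL}
</VirtualHost>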

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1813265

Title:
  Documentation should use endpoints with path /identity instead of port
  5000

Status in OpenStack Identity (keystone):
  New

Bug description:
  In devstack we configure keystone to run on port 80/443 proxied
  through the /identity URL path. We semi-officially recommend doing the
  same in production, but all of our documentation points to using port
  5000 with no path. We should update the documentation to use the
  recommended endpoint configuration.

  Note that keystone and horizon are commonly co-located and horizon by
  default runs on port 80/443 with no URL path, so the documentation
  will need to explain how to configure apache/nginx/haproxy such that
  horizon and keystone don't collide.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1813265/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp