[Yahoo-eng-team] [Bug 1886298] Re: Few of the lower constraints are not compatible with python3.8

2020-08-14 Thread Vishal Manchanda
** Also affects: horizon
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1886298

Title:
  Few of the lower constraints are not compatible with python3.8

Status in castellan:
  New
Status in ec2-api:
  New
Status in futurist:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in manila-ui:
  New
Status in masakari:
  New
Status in OpenStack Compute (nova):
  In Progress
Status in os-win:
  New
Status in oslo.messaging:
  New
Status in oslo.policy:
  New
Status in oslo.privsep:
  New
Status in oslo.reports:
  New
Status in oslo.vmware:
  New
Status in Glance Client:
  New
Status in python-keystoneclient:
  New
Status in python-manilaclient:
  New
Status in python-novaclient:
  In Progress
Status in python-senlinclient:
  New
Status in python-troveclient:
  New
Status in python-watcherclient:
  New
Status in Solum:
  New
Status in tacker:
  New
Status in taskflow:
  New
Status in watcher:
  New

Bug description:
  Lower constraints have been tested with python3.6 until now, and the
  jobs were running fine. With the migration of testing to Ubuntu Focal,
  where python3.8 is the default, the lower-constraints job started
  failing due to multiple issues.

  For example,

  Markupsafe 1.0 is not compatible with new setuptools:
  - https://github.com/pallets/markupsafe/issues/116

  paramiko 2.7.1 fixed the compatibility for python3.7 onwards:
  https://github.com/paramiko/paramiko/issues/1108

  greenlet 0.4.15 added wheels for python 3.8:
  https://github.com/python-greenlet/greenlet/issues/151

  numpy 1.19.1 added python 3.8 support and testing:
  https://github.com/numpy/numpy/pull/14775

  paramiko fix commit (for the issue above):
  https://github.com/paramiko/paramiko/commit/4753881223e0ff5e3b3be35bb687a18dfec4f672

  Similarly, many dependencies added python3.8 support only in later
  versions, so we need to bump their lower constraints to compatible
  versions.

  The approach to identify the required bumps is to run the
  lower-constraints job on Focal and start bumping versions for the
  failures. I started with the nova repos and found the version bumps
  below:

  For Nova:
  Markupsafe==1.1.1
  cffi==1.14.0
  greenlet==0.4.15
  PyYAML==3.13
  lxml==4.5.0
  numpy==1.19.0
  psycopg2==2.8
  paramiko==2.7.1

  For python-novaclient:
  Markupsafe==1.1.1
  cffi==1.14.0
  greenlet==0.4.15
  PyYAML==3.13

  For os-vif:
  Markupsafe==1.1.1
  cffi==1.14.0
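
  A quick way to sanity-check a bump list like the above is to compare
  the versions actually installed in a Focal/python3.8 environment
  against the proposed minimums. A minimal sketch (the package list is
  the one from this bug; the script itself is hypothetical, not part of
  any repo):

    # check_minimums.py - compare installed versions to proposed lower bounds
    from importlib import metadata  # stdlib as of python3.8

    MINIMUMS = {
        "MarkupSafe": "1.1.1",
        "cffi": "1.14.0",
        "greenlet": "0.4.15",
        "PyYAML": "3.13",
    }

    for name, minimum in MINIMUMS.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            print("%s: not installed" % name)
            continue
        print("%s: installed %s, proposed minimum %s" % (name, installed, minimum))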

To manage notifications about this bug go to:
https://bugs.launchpad.net/castellan/+bug/1886298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1891615] [NEW] AttributeError: module 'horizon.tables' has no attribute 'PagedTableWithPageMenu'

2020-08-14 Thread men
Public bug reported:

On my Linux system, horizon can only be installed from source:

git clone https://gitee.com/menkeyi/horizon.git  -b stable/train
[root@controller ~]# cd horizon/
[root@controller horizon]# pip3 install -r requirements.txt
[root@controller horizon]# python3 setup.py install

[root@controller01 openstack-dashboard]# ll
total 8
-rwxr-xr-x  1 root apache  830 Aug  5 09:09 manage.py
drwxr-xr-x 20 root apache 4096 Aug  5 09:08 openstack_dashboard
-rw-r--r--  1 root root  0 Aug  5 14:40 README.md
drwxr-xr-x 10 root apache  114 Aug  5 09:12 static
[root@controller01 openstack-dashboard]# ls openstack_dashboard/
api                    defaults.py        __init__.py    policy.py    templatetags       usage
conf                   django_pyscss_fix  karma.conf.js  __pycache__  test               utils
context_processors.py  enabled            local          settings.py  themes             views.py
contrib                exceptions.py      locale         static       theme_settings.py  wsgi
dashboards             hooks.py           management     templates    urls.py            wsgi.py

It's strange to hit this problem, and I am not sure how to investigate
it. One starting point is sketched below.
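
One way to investigate (a sketch, assuming the AttributeError comes from
a stale horizon copy under site-packages rather than the stable/train
checkout; run from the openstack-dashboard directory):

    import os
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "openstack_dashboard.settings")

    import django
    django.setup()

    # which horizon is actually imported, and does it have the attribute?
    import horizon.tables as tables
    print(tables.__file__)
    print(hasattr(tables, "PagedTableWithPageMenu"))

If the printed path points at an old install that predates
PagedTableWithPageMenu, reinstalling horizon from the checkout should
fix the mismatch.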

[root@controller01 openstack-dashboard]# python3 manage.py runserver
/usr/local/lib64/python3.7/site-packages/scss/namespace.py:172: DeprecationWarning: inspect.getargspec() is deprecated since Python 3.0, use inspect.signature() or inspect.getfullargspec()
  argspec = inspect.getargspec(function)
/usr/local/lib64/python3.7/site-packages/scss/selector.py:54: FutureWarning: Possible nested set at position 329
  ''', re.VERBOSE | re.MULTILINE)
/usr/local/lib64/python3.7/site-packages/scss/namespace.py:172: DeprecationWarning: inspect.getargspec() is deprecated since Python 3.0, use inspect.signature() or inspect.getfullargspec()
  argspec = inspect.getargspec(function)
/usr/local/lib64/python3.7/site-packages/scss/selector.py:54: FutureWarning: Possible nested set at position 329
  ''', re.VERBOSE | re.MULTILINE)
Performing system checks...

Unhandled exception in thread started by <function check_errors.<locals>.wrapper at 0xfffc08045d40>
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/django/utils/autoreload.py", line 225, in wrapper
    fn(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/django/core/management/commands/runserver.py", line 120, in inner_run
    self.check(display_num_errors=True)
  File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 364, in check
    include_deployment_checks=include_deployment_checks,
  File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 351, in _run_checks
    return checks.run_checks(**kwargs)
  File "/usr/local/lib/python3.7/site-packages/django/core/checks/registry.py", line 73, in run_checks
    new_errors = check(app_configs=app_configs)
  File "/usr/local/lib/python3.7/site-packages/django/core/checks/urls.py", line 13, in check_url_config
    return check_resolver(resolver)
  File "/usr/local/lib/python3.7/site-packages/django/core/checks/urls.py", line 23, in check_resolver
    return check_method()
  File "/usr/local/lib/python3.7/site-packages/django/urls/resolvers.py", line 400, in check
    warnings.extend(check_resolver(pattern))
  File "/usr/local/lib/python3.7/site-packages/django/core/checks/urls.py", line 23, in check_resolver
    return check_method()
  File "/usr/local/lib/python3.7/site-packages/django/urls/resolvers.py", line 399, in check
    for pattern in self.url_patterns:
  File "/usr/local/lib/python3.7/site-packages/django/utils/functional.py", line 36, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
  File "/usr/local/lib/python3.7/site-packages/django/urls/resolvers.py", line 542, in url_patterns
    iter(patterns)
  File "/usr/local/lib/python3.7/site-packages/horizon/base.py", line 692, in __iter__
    self._setup()
  File "/usr/local/lib/python3.7/site-packages/django/utils/functional.py", line 349, in _setup
    self._wrapped = self._setupfunc()
  File "/usr/local/lib/python3.7/site-packages/horizon/base.py", line 858, in url_patterns
    return self._urls()[0]
  File "/usr/local/lib/python3.7/site-packages/horizon/base.py", line 892, in _urls
    _wrapped_include(dash._decorated_urls)))
  File "/usr/local/lib/python3.7/site-packages/horizon/base.py", line 568, in _decorated_urls
    _wrapped_include(panel._decorated_urls)))
  File "/usr/local/lib/python3.7/site-packages/horizon/base.py", line 340, in _decorated_urls
    urlpatterns = self._get_default_urlpatterns()
  File "/usr/local/lib/python3.7/site-packages/horizon/base.py", line 142, in _get_default_urlpatterns
    urls_mod = import_module('.urls', package_string)
  File "/usr/lib64/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  Fil

[Yahoo-eng-team] [Bug 1889781] Re: Functional tests are timing out

2020-08-14 Thread Slawek Kaplonski
I see now that it also happens in our current functional test jobs,
which run on Ubuntu 18.04, e.g.:
https://1ecd386c312cee0f5e31-dcaa487a47c3d7a82f096f3792363793.ssl.cf1.rackcdn.com/745641/2/check/neutron-functional-with-uwsgi/8401f85/job-output.txt

** Summary changed:

- Functional tests on Ubuntu 20.04 are timed out
+ Functional tests are timing out

** Changed in: neutron
   Status: Won't Fix => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1889781

Title:
  Functional tests are timing out

Status in neutron:
  Confirmed

Bug description:
  The job is probably timing out because of the very large volume of
  logs. We had similar issues in the past and limited some WARNING
  messages from being logged to work around the issue, as sketched
  below.
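
  For illustration only, the general shape of that kind of workaround is
  to raise the level of a specific noisy logger so its warnings stop
  flooding the job output (the logger name here is hypothetical, not the
  one neutron actually tuned):

    import logging

    # keep ERRORs, drop the flood of WARNINGs from one noisy module
    logging.getLogger("neutron.some.noisy.module").setLevel(logging.ERROR)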

  Example of the job's logs:
  https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_117/734304/9/check/neutron-functional/11781bb/job-output.txt

  The problem is most likely caused by logs like the ones below, even
  though those tests are passing.

  2020-07-29 21:56:37.633888 | controller | The above exception was the direct cause of the following exception:
  2020-07-29 21:56:37.633897 | controller |
  2020-07-29 21:56:37.633909 | controller | Traceback (most recent call last):
  2020-07-29 21:56:37.633918 | controller |   File "/usr/lib/python3.8/contextlib.py", line 131, in __exit__
  2020-07-29 21:56:37.633927 | controller |     self.gen.throw(type, value, traceback)
  2020-07-29 21:56:37.633936 | controller |   File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 1064, in _transaction_scope
  2020-07-29 21:56:37.633946 | controller |     yield resource
  2020-07-29 21:56:37.633956 | controller |   File "/home/zuul/src/opendev.org/openstack/neutron/neutron/db/ovn_hash_ring_db.py", line 42, in remove_nodes_from_host
  2020-07-29 21:56:37.633965 | controller |     context.session.query(ovn_models.OVNHashRing).filter(
  2020-07-29 21:56:37.633974 | controller |   File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3894, in delete
  2020-07-29 21:56:37.633984 | controller |     delete_op.exec_()
  2020-07-29 21:56:37.633994 | controller |   File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1697, in exec_
  2020-07-29 21:56:37.634004 | controller |     self._do_exec()
  2020-07-29 21:56:37.634013 | controller |   File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1928, in _do_exec
  2020-07-29 21:56:37.634023 | controller |     self._execute_stmt(delete_stmt)
  2020-07-29 21:56:37.634032 | controller |   File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py", line 1702, in _execute_stmt
  2020-07-29 21:56:37.634042 | controller |     self.result = self.query._execute_crud(stmt, self.mapper)
  2020-07-29 21:56:37.634050 | controller |   File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/sqlalchemy/orm/query.py", line 3536, in _execute_crud
  2020-07-29 21:56:37.634059 | controller |     return conn.execute(stmt, self._params)
  2020-07-29 21:56:37.634069 | controller |   File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1014, in execute
  2020-07-29 21:56:37.634079 | controller |     return meth(self, multiparams, params)
  2020-07-29 21:56:37.634088 | controller |   File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 298, in _execute_on_connection
  2020-07-29 21:56:37.634098 | controller |     return connection._execute_clauseelement(self, multiparams, params)
  2020-07-29 21:56:37.634121 | controller |   File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1127, in _execute_clauseelement
  2020-07-29 21:56:37.634132 | controller |     ret = self._execute_context(
  2020-07-29 21:56:37.634142 | controller |   File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1317, in _execute_context
  2020-07-29 21:56:37.634150 | controller |     self._handle_dbapi_exception(
  2020-07-29 21:56:37.634163 | controller |   File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1509, in _handle_dbapi_exception
  2020-07-29 21:56:37.634172 | controller |

[Yahoo-eng-team] [Bug 1891346] Re: Cannot delete nova-compute service due to service ID conflict

2020-08-14 Thread John Garbutt
** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1891346

Title:
  Cannot delete nova-compute service due to service ID conflict

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  I am trying to delete a nova-compute service for a retired hypervisor:

  $ openstack compute service delete 124
   
  Failed to delete compute service with ID '124': Service id 124 refers to multiple services. (HTTP 400) (Request-ID: req-05e01880-237c-4efd-8c54-2899ccbf7316)
  1 of 1 compute services failed to delete.

  This is caused by a conflicting service with the same ID in
  nova_cell0:

  MariaDB [nova_cell0]> SELECT * FROM services WHERE id = 124;
  +---------------------+------------+------------+-----+---------------+---------------+-------+--------------+----------+---------+-----------------+--------------+-------------+---------+--------------------------------------+
  | created_at          | updated_at | deleted_at | id  | host          | binary        | topic | report_count | disabled | deleted | disabled_reason | last_seen_up | forced_down | version | uuid                                 |
  +---------------------+------------+------------+-----+---------------+---------------+-------+--------------+----------+---------+-----------------+--------------+-------------+---------+--------------------------------------+
  | 2020-05-27 18:43:34 | NULL       | NULL       | 124 | 172.16.52.246 | nova-metadata | NULL  |            0 |        0 |       0 | NULL            | NULL         |           0 |      40 | cb03be2c-cd62-4d48-a2eb-424df70862c5 |
  +---------------------+------------+------------+-----+---------------+---------------+-------+--------------+----------+---------+-----------------+--------------+-------------+---------+--------------------------------------+

  This service in cell0 appears to have been created at the time of an
  upgrade from Stein to Train.

  Environment
  ===========

  python2-novaclient-15.1.0-1.el7.noarch
  python2-nova-20.2.0-1.el7.noarch
  openstack-nova-api-20.2.0-1.el7.noarch
  openstack-nova-common-20.2.0-1.el7.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1891346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1890539] Re: failed to create port with security group of other tenant

2020-08-14 Thread OpenStack Infra
Reviewed: https://review.opendev.org/745089
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=cc54a1c38e0b537883de43fecda781034c80daf3
Submitter: Zuul
Branch: master

commit cc54a1c38e0b537883de43fecda781034c80daf3
Author: zhanghao 
Date:   Thu Aug 6 04:02:36 2020 -0400

Fix port can not be created with the sg of other project

This patch adds a check for whether the context is admin when
verifying the valid security groups of a port.

Change-Id: I2674bdc448d9a091b9fe8c68f0866fd19141c6be
Closes-Bug: #1890539


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1890539

Title:
  failed to create port with security group of other tenant

Status in neutron:
  Fix Released

Bug description:
  How to reproduce this problem (a sketch of the fixed lookup behavior
  follows these steps):
  1. source demo-openrc
  2. openstack security group create sg001
  3. source admin-openrc
  4. openstack port create port001 --network net10 --security-group sg001
     -> Failed, prompting the following error:
        Security group sg001_id does not exist
  5. openstack port create port002 --network net10
     openstack port set port002 --security-group sg001
     -> OK; port002's security group ids now include sg001_id
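
  A rough sketch of the behavior the fix enables (illustrative only;
  Context, SECURITY_GROUPS and find_security_group are hypothetical
  stand-ins, not neutron's actual code):

    from dataclasses import dataclass

    @dataclass
    class Context:
        project_id: str
        is_admin: bool = False

    # security group id -> owning project
    SECURITY_GROUPS = {"sg001_id": "demo-project"}

    def find_security_group(context, sg_id):
        owner = SECURITY_GROUPS.get(sg_id)
        if owner is None:
            raise LookupError("Security group %s does not exist" % sg_id)
        # Before the fix the lookup was always scoped to the caller's
        # project, so an admin referencing another project's group got
        # "does not exist"; with the fix, admin context skips the filter.
        if not context.is_admin and owner != context.project_id:
            raise LookupError("Security group %s does not exist" % sg_id)
        return sg_id

    print(find_security_group(Context("admin-project", is_admin=True), "sg001_id"))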

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1890539/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1890803] Re: Groovy amd64 / arm64 / PowerPC deployment seems not working

2020-08-14 Thread Scott Moser
Fixed in groovy at cloud-initramfs-tools 0.46ubuntu1

** Also affects: cloud-initramfs-tools (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-initramfs-tools (Ubuntu)
   Importance: Undecided => High

** Changed in: cloud-initramfs-tools (Ubuntu)
   Status: New => Fix Released

** Changed in: cloud-initramfs-tools (Ubuntu)
 Assignee: (unassigned) => Scott Moser (smoser)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1890803

Title:
  Groovy amd64 / arm64 / PowerPC deployment seems not working

Status in cloud-init:
  Invalid
Status in cloud-initramfs-tools:
  Fix Committed
Status in MAAS:
  Triaged
Status in ubuntu-kernel-tests:
  In Progress
Status in cloud-initramfs-tools package in Ubuntu:
  Fix Released

Bug description:
  All bare-metal deployment tasks have failed on our bare-metal MAAS
  server and on PowerMAAS / HyperMAAS (image synced on the MAAS server).

  Deployment failed with:
    Installation was aborted.

  This needs further investigation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1890803/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1891673] [NEW] qrouter ns ip rules not deleted when fip removed from vm

2020-08-14 Thread Edward Hope-Morley
Public bug reported:

With Bionic Stein using dvr_snat, if I add a floating ip to a vm and
then remove it, the corresponding ip rules in the associated qrouter ns
local to the instance are not deleted. As a result the vm can no longer
reach the external network, because packets are still sent to the fip
namespace (via rfp-/fpr-). For example, on my compute host running a vm
whose address is 192.168.21.28, for which I have removed the fip, I
still see:

# ip netns exec qrouter-5e45608f-33d4-41bf-b3ba-915adf612e65 ip rule list
0:  from all lookup local 
32765:  from 192.168.21.28 lookup 16 
32766:  from all lookup main 
32767:  from all lookup default 
3232240897: from 192.168.21.1/24 lookup 3232240897 
3232241231: from 192.168.22.79/24 lookup 3232241231

And table 16 leads to:

# ip netns exec qrouter-5e45608f-33d4-41bf-b3ba-915adf612e65 ip route show table 16
default via 169.254.109.249 dev rfp-5e45608f-3

This results in the instance no longer being able to reach the external
network (packets are never sent to the snat- ns in my case).

The workaround is to delete that ip rule by hand (see the command
below), but neutron should be taking care of this. The culprit looks to
be in neutron/agent/l3/dvr_local_router.py:floating_ip_removed_dist.

Note that the NAT rules were successfully removed from iptables, so it
looks like it is just this ip rule that is left behind.
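
For reference, the manual workaround is a single command of this shape
(the namespace and rule are the ones from my environment above):

# ip netns exec qrouter-5e45608f-33d4-41bf-b3ba-915adf612e65 ip rule del from 192.168.21.28 lookup 16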

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: sts

** Tags added: sts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1891673

Title:
  qrouter ns ip rules not deleted when fip removed from vm

Status in neutron:
  New

Bug description:
  With Bionic Stein using dvr_snat, if I add a floating ip to a vm and
  then remove it, the corresponding ip rules in the associated qrouter
  ns local to the instance are not deleted. As a result the vm can no
  longer reach the external network, because packets are still sent to
  the fip namespace (via rfp-/fpr-). For example, on my compute host
  running a vm whose address is 192.168.21.28, for which I have removed
  the fip, I still see:

  # ip netns exec qrouter-5e45608f-33d4-41bf-b3ba-915adf612e65 ip rule list
  0:  from all lookup local 
  32765:  from 192.168.21.28 lookup 16 
  32766:  from all lookup main 
  32767:  from all lookup default 
  3232240897: from 192.168.21.1/24 lookup 3232240897 
  3232241231: from 192.168.22.79/24 lookup 3232241231

  And table 16 leads to:

  # ip netns exec qrouter-5e45608f-33d4-41bf-b3ba-915adf612e65 ip route show table 16
  default via 169.254.109.249 dev rfp-5e45608f-3

  This results in the instance no longer being able to reach the
  external network (packets are never sent to the snat- ns in my case).

  The workaround is to delete that ip rule by hand, but neutron should
  be taking care of this. The culprit looks to be in
  neutron/agent/l3/dvr_local_router.py:floating_ip_removed_dist.

  Note that the NAT rules were successfully removed from iptables, so it
  looks like it is just this ip rule that is left behind.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1891673/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp