[Yahoo-eng-team] [Bug 1714251] Re: router_centralized_snat not removed when router migrated from DVR to HA

2017-09-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/501717
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=777fb2af455eea31f9c6ea1f0e260ee8d4d0dbd7
Submitter: Jenkins
Branch: master

commit 777fb2af455eea31f9c6ea1f0e260ee8d4d0dbd7
Author: venkata anil 
Date:   Thu Sep 7 12:36:05 2017 +

Remove csnat port when DVR migrated to non-DVR

When a router is migrated from DVR to HA or from DVR to a centralized
router, the router_centralized_snat port still exists in the DB. When the
router is no longer a DVR router, this port is useless and has to be
removed from the DB.

This patch removes the router_centralized_snat port when a router is
migrated from DVR to other modes.

Closes-Bug: 1714251
Change-Id: I124514d021ff8539ac3a628907cb49611ef66d08


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714251

Title:
  router_centralized_snat not removed when router migrated from DVR to
  HA

Status in neutron:
  Fix Released

Bug description:
  When a router is migrated from DVR to HA, all ports related to DVR
  should be removed. However, a port with device_owner
  router_centralized_snat is still left behind.

  Steps to reproduce:
  1) create a network n1, and a subnet named sn1 on this network
  2) create a DVR, attach it to sn1 through router-interface-add, and set
     the gateway (router-gateway-set public)
  3) boot a vm on n1 and associate a floating ip
  4) set admin-state to False, i.e. neutron router-update --admin-state-up False
  5) now update the router to an HA router, i.e.
     neutron router-update --distributed=False --ha=True
  6) neutron port-list and also
     "select * from ports where device_id='router-id';"
     will show this "network:router_centralized_snat" port
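The expected cleanup can be sketched in a few lines; this is an illustrative
model of what the fix does (the port dicts and helper name are hypothetical,
not neutron's actual code):

```python
# Owner string neutron assigns to the centralized SNAT port of a DVR.
CSNAT_OWNER = "network:router_centralized_snat"

def stale_csnat_ports(ports, router_id):
    """Return ports that should be deleted once the router is no longer DVR."""
    return [p for p in ports
            if p["device_id"] == router_id and p["device_owner"] == CSNAT_OWNER]

ports = [
    {"id": "p1", "device_id": "r1", "device_owner": CSNAT_OWNER},
    {"id": "p2", "device_id": "r1", "device_owner": "network:router_ha_interface"},
]
assert [p["id"] for p in stale_csnat_ports(ports, "r1")] == ["p1"]
```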

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1714251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1510234] Re: Heartbeats stop when time is changed

2017-09-13 Thread Dinesh Bhor
** Also affects: masakari
   Importance: Undecided
   Status: New

** Changed in: masakari
 Assignee: (unassigned) => Dinesh Bhor (dinesh-bhor)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1510234

Title:
  Heartbeats stop when time is changed

Status in masakari:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.service:
  Fix Released

Bug description:
  Heartbeats stop working when you mess with the system time. If a
  monotonic clock were used, they would continue to work when the system
  time was changed.

  Steps to reproduce:

  1. List the nova services ('nova-manage service list'). Note that the
  'State' for each service is a happy face ':-)'.

  2. Move the time ahead (for example 2 hours in the future), and then
  list the nova services again. Note that heartbeats continue to work
  and use the future time (see 'Updated_At').

  3. Revert back to the actual time, and list the nova services again.
  Note that all heartbeats stop, and have a 'State' of 'XXX'.

  4. The heartbeats will start again in 2 hours when the actual time
  catches up to the future time, or if you restart the services.

  5. You'll see a log message like the following when the heartbeats
  stop:

  2015-10-26 17:14:10.538 DEBUG nova.servicegroup.drivers.db [req-c41a2ad7-e5a5-4914-bdc8-6c1ca8b224c6 None None] Seems service is down. Last heartbeat was 2015-10-26 17:20:20. Elapsed time is -369.461679 from (pid=13994) is_up /opt/stack/nova/nova/servicegroup/drivers/db.py:80

  Here's example output demonstrating the issue:

  http://paste.openstack.org/show/477404/

  See bug #1450438 for more context:

  https://bugs.launchpad.net/oslo.service/+bug/1450438

  Long story short: the looping call is using the built-in time rather
  than a monotonic clock for sleeps.

  
  https://github.com/openstack/oslo.service/blob/3d79348dae4d36bcaf4e525153abf74ad4bd182a/oslo_service/loopingcall.py#L122

  Oslo Service: version 0.11
  Nova: master (commit 2c3f9c339cae24576fefb66a91995d6612bb4ab2)
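The failure mode can be demonstrated with a small sketch. The `is_up`
function below is a simplified model of the wall-clock check (nova's db
driver compares abs(elapsed)), not the real code, and the timestamps are
hypothetical:

```python
import time

SERVICE_DOWN_TIME = 60.0

def is_up(last_heartbeat, now):
    # Wall-clock sketch: a heartbeat recorded while the clock was set
    # ahead yields a large negative elapsed time once the clock is
    # reverted, so the service wrongly looks down.
    return abs(now - last_heartbeat) <= SERVICE_DOWN_TIME

beat_at_future_time = 10_000.0        # heartbeat taken with the clock 2h ahead
now_after_revert = 10_000.0 - 7200.0  # wall clock set back to the real time
assert not is_up(beat_at_future_time, now_after_revert)  # "down" for ~2 hours

# time.monotonic() can never go backwards, so elapsed-time calculations
# based on it are immune to system-time changes.
t0 = time.monotonic()
t1 = time.monotonic()
assert t1 - t0 >= 0.0
```

This is why the fix in oslo.service moved the looping-call sleeps onto a
monotonic clock.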

To manage notifications about this bug go to:
https://bugs.launchpad.net/masakari/+bug/1510234/+subscriptions



[Yahoo-eng-team] [Bug 1716746] Re: functional job broken by new os-testr

2017-09-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/503793
Committed: 
https://git.openstack.org/cgit/openstack/networking-bagpipe/commit/?id=c6a9bcc34b63c927a37a0596f4bd8c1b0292cff2
Submitter: Jenkins
Branch: master

commit c6a9bcc34b63c927a37a0596f4bd8c1b0292cff2
Author: Thomas Morin 
Date:   Wed Sep 13 13:45:56 2017 -0600

Fix post gate hook to accommodate for new os-testr

The new os-testr uses stestr under the hood, which creates a .stestr
directory rather than a .testrepository directory in the current dir.
Other than that, there doesn't seem to be any difference in the format
or names of the files generated in the directory.

(shamelessly stolen from I82d52bf0ad885bd36d2f0782a7c86ac61df532f2)
Co-Authored-By: Ihar Hrachyshka 

Change-Id: Ieee4bf6e3399b1f496850a23ba2e135b61b03f27
Closes-Bug: 1716746


** Changed in: networking-bagpipe
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1716746

Title:
  functional job broken by new os-testr

Status in networking-bgpvpn:
  In Progress
Status in BaGPipe:
  Fix Released
Status in networking-sfc:
  In Progress
Status in neutron:
  Fix Released

Bug description:
  functional job fails with:

  2017-09-12 16:09:20.705975 | 2017-09-12 16:09:20.705 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L67:   testr_exit_code=0
  2017-09-12 16:09:20.707372 | 2017-09-12 16:09:20.706 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L68:   set -e
  2017-09-12 16:09:20.718005 | 2017-09-12 16:09:20.717 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L71:   generate_testr_results
  2017-09-12 16:09:20.719619 | 2017-09-12 16:09:20.719 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:generate_testr_results:L12:   sudo -H -u stack chmod o+rw .
  2017-09-12 16:09:20.720974 | 2017-09-12 16:09:20.720 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:generate_testr_results:L13:   sudo -H -u stack chmod o+rw -R .testrepository
  2017-09-12 16:09:20.722284 | 2017-09-12 16:09:20.721 | chmod: cannot access '.testrepository': No such file or directory

  This is because new os-testr switched to stestr that has a different
  name for the directory (.stestr).
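One robust way to fix such hooks is to stop hard-coding the directory name
and accept either layout. A minimal sketch of that idea (the helper name is
hypothetical; the actual fix patches the bash hook):

```python
import os

def results_dir(base="."):
    """Prefer the new stestr directory, fall back to the old
    testrepository one, instead of assuming '.testrepository' exists."""
    for name in (".stestr", ".testrepository"):
        path = os.path.join(base, name)
        if os.path.isdir(path):
            return path
    return None
```

The bash equivalent used by the hook fix iterates over both names the same
way before running chmod.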

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1716746/+subscriptions



[Yahoo-eng-team] [Bug 1717149] [NEW] no need to discover panel in _autodiscover in Site class

2017-09-13 Thread chaoliu
Public bug reported:

There is no need to discover panels in the Site class since the job is
done by the _autodiscover() method in the Dashboard class.

** Affects: horizon
 Importance: Undecided
 Assignee: chaoliu (liuchao)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => chaoliu (liuchao)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1717149

Title:
  no need to discover panel in _autodiscover in Site class

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There is no need to discover panels in the Site class since the job is
  done by the _autodiscover() method in the Dashboard class.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1717149/+subscriptions



[Yahoo-eng-team] [Bug 1717147] [NEW] cloud-init 0.7.9 fails for CentOS 7.4 in Cloudstack

2017-09-13 Thread Ian Forde
Public bug reported:

Environment:
CentOS 7.4, cloud-init-0.7.9-9.el7.centos.2.x86_64

Problem (quick):
CentOS 7.4 builds on Cloudstack 4.8 don't run cloud-init because the newer 
version of cloud-init doesn't appear to like the way the dhclient lease file is 
named.

Problem (long):

I've just built a CentOS 7.4 instance in one of my CloudStack 4.8
clusters.  Unfortunately, cloud-init fails with the following snippet in
/var/log/cloud-init.log:

2017-09-13 18:53:00,118 - __init__.py[DEBUG]: Seeing if we can get any data 
from 
2017-09-13 18:53:00,118 - DataSourceCloudStack.py[DEBUG]: Using 
/var/lib/dhclient lease directory
2017-09-13 18:53:00,118 - DataSourceCloudStack.py[DEBUG]: No lease file found, 
using default gateway

Where it then tries to use the default route to download userdata.  The
problem is that we're not using the Cloudstack VR as a default router,
so I expected it to parse /var/lib/dhclient/dhclient--eth0.lease for the
"dhcp-server-identifier" line.

Theory as to cause:
I believe that this change
(https://github.com/cloud-init/cloud-init/commit/aee0edd93cb4d78b5e0d1aec71e977aabf31cdd0#diff-5bc9de2bb7889d66205845400c7cf99b)
breaks cloud-init versions after the cloud-init 0.7.5 distributed with
7.3, now that 7.4 includes 0.7.9-9.

Fix:

Changing it from "dhclient." to "dhclient-" in
/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceCloudStack.py
on the running box with an installed RPM did the trick theoretically
(after removing the pyc and pyo files, of course).

This *can* be patched around by RedHat/CentOS (and hopefully will), but
I figure it might be better to take it straight upstream.
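A more tolerant lookup would sidestep the naming difference entirely. This
is a hypothetical sketch of such a helper, not cloud-init's actual code:

```python
import glob
import os

def latest_lease_file(lease_dir="/var/lib/dhclient"):
    """Accept lease files named either 'dhclient.*' or 'dhclient-*' by
    matching on the common prefix, then pick the most recently modified."""
    candidates = [f for f in glob.glob(os.path.join(lease_dir, "dhclient*"))
                  if f.endswith(".lease") or f.endswith(".leases")]
    return max(candidates, key=os.path.getmtime) if candidates else None
```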

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1717147

Title:
  cloud-init 0.7.9 fails for CentOS 7.4 in Cloudstack

Status in cloud-init:
  New

Bug description:
  Environment:
  CentOS 7.4, cloud-init-0.7.9-9.el7.centos.2.x86_64

  Problem (quick):
  CentOS 7.4 builds on Cloudstack 4.8 don't run cloud-init because the newer 
version of cloud-init doesn't appear to like the way the dhclient lease file is 
named.

  Problem (long):

  I've just built a CentOS 7.4 instance in one of my CloudStack 4.8
  clusters.  Unfortunately, cloud-init fails with the following snippet
  in /var/log/cloud-init.log:

  2017-09-13 18:53:00,118 - __init__.py[DEBUG]: Seeing if we can get any data 
from 
  2017-09-13 18:53:00,118 - DataSourceCloudStack.py[DEBUG]: Using 
/var/lib/dhclient lease directory
  2017-09-13 18:53:00,118 - DataSourceCloudStack.py[DEBUG]: No lease file 
found, using default gateway

  Where it then tries to use the default route to download userdata.
  The problem is that we're not using the Cloudstack VR as a default
  router, so I expected it to parse
  /var/lib/dhclient/dhclient--eth0.lease for the "dhcp-server-identifier"
  line.

  Theory as to cause:
  I believe that this change
  (https://github.com/cloud-init/cloud-init/commit/aee0edd93cb4d78b5e0d1aec71e977aabf31cdd0#diff-5bc9de2bb7889d66205845400c7cf99b)
  breaks cloud-init versions after the cloud-init 0.7.5 distributed with
  7.3, now that 7.4 includes 0.7.9-9.

  Fix:

  Changing it from "dhclient." to "dhclient-" in
  /usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceCloudStack.py
  on the running box with an installed RPM did the trick theoretically
  (after removing the pyc and pyo files, of course).

  This *can* be patched around by RedHat/CentOS (and hopefully will),
  but I figure it might be better to take it straight upstream.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1717147/+subscriptions



[Yahoo-eng-team] [Bug 1717121] [NEW] Security group Inbound rule allows ip addresses with /0 option.

2017-09-13 Thread Nilesh
Public bug reported:

Under security groups, when we try to add a new inbound rule using CIDR,
it doesn't validate the input.

Example:

0.0.0.0/0 is a rule that deliberately opens inbound access to the
internet. But if a valid address is combined with a /0 prefix, e.g.
172.155.0.0/0, that zero-bit match should not be allowed.

AWS includes this UI validation. In Horizon, if someone mistakenly types
/0 with a valid ip address, the rule opens inbound access to the entire
internet.
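The standard library can already express this check. A minimal validation
sketch (illustrative only, not Horizon's actual validator): strict parsing
rejects any CIDR whose host bits are set for the given prefix, which
catches a real address accidentally combined with /0.

```python
import ipaddress

def valid_security_group_cidr(cidr):
    """Return True only for CIDRs whose host bits are clear."""
    try:
        ipaddress.ip_network(cidr, strict=True)
    except ValueError:
        return False
    return True

assert valid_security_group_cidr("0.0.0.0/0")          # deliberately open
assert not valid_security_group_cidr("172.155.0.0/0")  # host bits set with /0
assert valid_security_group_cidr("172.155.0.0/16")
```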

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1717121

Title:
  Security group Inbound rule allows ip addresses with /0 option.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Under security groups, when we try to add a new inbound rule using
  CIDR, it doesn't validate the input.

  Example:

  0.0.0.0/0 is a rule that deliberately opens inbound access to the
  internet. But if a valid address is combined with a /0 prefix, e.g.
  172.155.0.0/0, that zero-bit match should not be allowed.

  AWS includes this UI validation. In Horizon, if someone mistakenly
  types /0 with a valid ip address, the rule opens inbound access to the
  entire internet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1717121/+subscriptions



[Yahoo-eng-team] [Bug 1688372] Re: the user guide should talk about os-client-config and clouds.yaml

2017-09-13 Thread Gary W. Smith
** Changed in: horizon
   Importance: Undecided => Wishlist

** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1688372

Title:
  the user guide should talk about os-client-config and clouds.yaml

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in openstack-manuals:
  Won't Fix

Bug description:
  
  - [ ] This doc is inaccurate in this way: __
  - [x] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  The user guide talks about using environment variables and RC files.
  It should also talk about os-client-config and clouds.yaml, especially
  for the unified client. See
  http://docs.openstack.org/developer/os-client-config/ for details.
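For reference, such a doc addition could show a minimal clouds.yaml; all
values below are hypothetical placeholders:

```yaml
# ~/.config/openstack/clouds.yaml
clouds:
  mycloud:
    auth:
      auth_url: http://controller:5000/v3
      username: demo
      password: secret
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

The unified client then selects the entry with, e.g.,
`openstack --os-cloud mycloud server list`.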

  
  ---
  Release: 15.0.0 on 2017-05-04 10:31
  SHA: 980b7fe8c553955d4bb833de4d9a0c70eb5038b7
  Source: https://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/user-guide/source/common/cli-set-environment-variables-using-openstack-rc.rst
  URL: https://docs.openstack.org/user-guide/common/cli-set-environment-variables-using-openstack-rc.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1688372/+subscriptions



[Yahoo-eng-team] [Bug 1716718] Re: chown commands failing (no rootwrap filter)

2017-09-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/503079
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=39c2cceb75265ddf67822ca40d2d69d2e27e3a91
Submitter: Jenkins
Branch: master

commit 39c2cceb75265ddf67822ca40d2d69d2e27e3a91
Author: Michael Still 
Date:   Wed Sep 13 03:07:36 2017 +1000

Fix missed chown call

When privsep'ing chown calls, this one was missed. Fix that.

I think this entire method should go away, but it will break at least
one out-of-tree driver. I'm talking to the powervm guys about a way
forward there.

Change-Id: I8a9bda36728896e60b13c32afda0a7130664cb7b
Closes-Bug: #1716718


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1716718

Title:
  chown commands failing (no rootwrap filter)

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  
  https://review.openstack.org/#/c/471972/31/etc/nova/rootwrap.d/compute.filters

  The above change removed the chown rootwrap filter. However, the
  temporary_chown method in nova.utils is still calling
  execute('chown', ...), which is failing. This needs to be converted to
  use the new nova.privsep.dac_admin chown method.

  Environment
  ===
  Openstack version: master
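The pattern at issue can be sketched without any nova internals. This toy
context manager shows the temporary_chown shape; in real nova the two chown
calls must be routed through privsep (or a rootwrap filter) to gain the
needed privileges, which is exactly the call that was left behind:

```python
import os
from contextlib import contextmanager

@contextmanager
def temporary_chown(path, uid):
    """Change a file's owner for the duration of the block, then restore
    it. Illustrative sketch only; not nova's actual implementation."""
    orig_uid = os.stat(path).st_uid
    os.chown(path, uid, -1)   # in nova, this goes through privsep
    try:
        yield path
    finally:
        os.chown(path, orig_uid, -1)
```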

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1716718/+subscriptions



[Yahoo-eng-team] [Bug 1717046] [NEW] L3HARouterVRIdAllocationDbObjectTestCase.test_delete_objects fails because of duplicate record

2017-09-13 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/94/503794/1/check/gate-neutron-python35/27f478d/testr_results.html.gz

Traceback (most recent call last):
  File "/home/jenkins/workspace/gate-neutron-python35/neutron/objects/base.py", line 633, in create
    self.modify_fields_to_db(fields))
  File "/home/jenkins/workspace/gate-neutron-python35/neutron/objects/db/api.py", line 61, in create_object
    context.session.add(db_obj)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 567, in __exit__
    self.rollback()
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/util/compat.py", line 187, in reraise
    raise value
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 564, in __exit__
    self.commit()
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 461, in commit
    self._prepare_impl()
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 441, in _prepare_impl
    self.session.flush()
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 2177, in flush
    self._flush(objects)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 2297, in _flush
    transaction.rollback(_capture_exception=True)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/util/compat.py", line 187, in reraise
    raise value
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/session.py", line 2261, in _flush
    flush_context.execute()
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/unitofwork.py", line 389, in execute
    rec.execute(self)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/unitofwork.py", line 548, in execute
    uow
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/persistence.py", line 181, in save_obj
    mapper, table, insert)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/orm/persistence.py", line 799, in _emit_insert_statements
    execute(statement, multiparams)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 945, in execute
    return meth(self, multiparams, params)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
    context)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1398, in _handle_dbapi_exception
    util.raise_from_cause(newraise, exc_info)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/util/compat.py", line 186, in reraise
    raise value.with_traceback(tb)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
    context)
  File "/home/jenkins/workspace/gate-neutron-python35/.tox/py35/lib/python3.5/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute
    cursor.execute(statement, parameters)
oslo_db.exception.DBDuplicateEntry: (sqlite3.IntegrityError) UNIQUE constraint failed: ha_router_vrid_allocations.network_id, ha_router_vrid_allocations.vr_id [SQL: 'INSERT INTO ha_router_vrid_allocations (network_id, vr_id

[Yahoo-eng-team] [Bug 1716746] Re: functional job broken by new os-testr

2017-09-13 Thread Thomas Morin
** Also affects: bgpvpn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1716746

Title:
  functional job broken by new os-testr

Status in networking-bgpvpn:
  New
Status in BaGPipe:
  New
Status in networking-sfc:
  In Progress
Status in neutron:
  Fix Released

Bug description:
  functional job fails with:

  2017-09-12 16:09:20.705975 | 2017-09-12 16:09:20.705 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L67:   testr_exit_code=0
  2017-09-12 16:09:20.707372 | 2017-09-12 16:09:20.706 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L68:   set -e
  2017-09-12 16:09:20.718005 | 2017-09-12 16:09:20.717 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L71:   generate_testr_results
  2017-09-12 16:09:20.719619 | 2017-09-12 16:09:20.719 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:generate_testr_results:L12:   sudo -H -u stack chmod o+rw .
  2017-09-12 16:09:20.720974 | 2017-09-12 16:09:20.720 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:generate_testr_results:L13:   sudo -H -u stack chmod o+rw -R .testrepository
  2017-09-12 16:09:20.722284 | 2017-09-12 16:09:20.721 | chmod: cannot access '.testrepository': No such file or directory

  This is because new os-testr switched to stestr that has a different
  name for the directory (.stestr).

To manage notifications about this bug go to:
https://bugs.launchpad.net/bgpvpn/+bug/1716746/+subscriptions



[Yahoo-eng-team] [Bug 1716746] Re: functional job broken by new os-testr

2017-09-13 Thread Thomas Morin
** Also affects: networking-bagpipe
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1716746

Title:
  functional job broken by new os-testr

Status in BaGPipe:
  New
Status in networking-sfc:
  In Progress
Status in neutron:
  Fix Released

Bug description:
  functional job fails with:

  2017-09-12 16:09:20.705975 | 2017-09-12 16:09:20.705 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L67:   testr_exit_code=0
  2017-09-12 16:09:20.707372 | 2017-09-12 16:09:20.706 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L68:   set -e
  2017-09-12 16:09:20.718005 | 2017-09-12 16:09:20.717 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L71:   generate_testr_results
  2017-09-12 16:09:20.719619 | 2017-09-12 16:09:20.719 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:generate_testr_results:L12:   sudo -H -u stack chmod o+rw .
  2017-09-12 16:09:20.720974 | 2017-09-12 16:09:20.720 | + /opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:generate_testr_results:L13:   sudo -H -u stack chmod o+rw -R .testrepository
  2017-09-12 16:09:20.722284 | 2017-09-12 16:09:20.721 | chmod: cannot access '.testrepository': No such file or directory

  This is because new os-testr switched to stestr that has a different
  name for the directory (.stestr).

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-bagpipe/+bug/1716746/+subscriptions



[Yahoo-eng-team] [Bug 1717000] Re: InstanceNotFound prevents putting over-quota instance into ERROR state

2017-09-13 Thread Matt Riedemann
** Also affects: nova/pike
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1717000

Title:
  InstanceNotFound prevents putting over-quota instance into ERROR state

Status in OpenStack Compute (nova):
  New
Status in OpenStack Compute (nova) pike series:
  New

Bug description:
  I found this when trying to recreate bug 1716706.

  https://bugs.launchpad.net/nova/+bug/1716706/comments/4

  Basically I can get conductor to fail the quota recheck and go to set
  the instance into ERROR state but it fails to find the instance since
  we don't have the cell context targeted:

  Sep 13 17:58:26 devstack-queens nova-conductor[3129]: WARNING nova.scheduler.utils [None req-90a115b2-5838-4be2-afe2-a3b755015e19 demo demo] [instance: 888925b0-164a-4d4a-bb6c-c0426f904e95] Setting instance to ERROR state.: TooManyInstances: Quota exceeded for instances: Requested 1, but already used 10 of 10 instances
  Sep 13 17:58:26 devstack-queens nova-conductor[3129]: ERROR root [None req-90a115b2-5838-4be2-afe2-a3b755015e19 demo demo] Original exception being dropped: ['Traceback (most recent call last):\n', ' File "/opt/stack/nova/nova/conductor/manager.py", line 1003, in schedule_and_build_instances\n orig_num_req=len(build_requests))\n', ' File "/opt/stack/nova/nova/compute/utils.py", line 764, in check_num_instances_quota\n allowed=total_alloweds)\n', 'TooManyInstances: Quota exceeded for instances: Requested 1, but already used 10 of 10 instances\n']: InstanceNotFound: Instance 888925b0-164a-4d4a-bb6c-c0426f904e95 could not be found.
  Sep 13 17:58:26 devstack-queens nova-conductor[3129]: ERROR oslo_messaging.rpc.server [None req-90a115b2-5838-4be2-afe2-a3b755015e19 demo demo] Exception during message handling: InstanceNotFound: Instance 888925b0-164a-4d4a-bb6c-c0426f904e95 could not be found.
  Sep 13 17:58:26 devstack-queens nova-conductor[3129]: ERROR oslo_messaging.rpc.server InstanceNotFound: Instance 888925b0-164a-4d4a-bb6c-c0426f904e95 could not be found.

  Because we don't target the cell when updating the instance.

  
  https://github.com/openstack/nova/blob/cfdec41eeec5fab220702efefdaafc45559aeb14/nova/conductor/manager.py#L1168

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1717000/+subscriptions



[Yahoo-eng-team] [Bug 1717000] [NEW] InstanceNotFound prevents putting over-quota instance into ERROR state

2017-09-13 Thread Matt Riedemann
Public bug reported:

I found this when trying to recreate bug 1716706.

https://bugs.launchpad.net/nova/+bug/1716706/comments/4

Basically I can get conductor to fail the quota recheck and go to set
the instance into ERROR state but it fails to find the instance since we
don't have the cell context targeted:

Sep 13 17:58:26 devstack-queens nova-conductor[3129]: WARNING nova.scheduler.utils [None req-90a115b2-5838-4be2-afe2-a3b755015e19 demo demo] [instance: 888925b0-164a-4d4a-bb6c-c0426f904e95] Setting instance to ERROR state.: TooManyInstances: Quota exceeded for instances: Requested 1, but already used 10 of 10 instances
Sep 13 17:58:26 devstack-queens nova-conductor[3129]: ERROR root [None req-90a115b2-5838-4be2-afe2-a3b755015e19 demo demo] Original exception being dropped: ['Traceback (most recent call last):\n', ' File "/opt/stack/nova/nova/conductor/manager.py", line 1003, in schedule_and_build_instances\n orig_num_req=len(build_requests))\n', ' File "/opt/stack/nova/nova/compute/utils.py", line 764, in check_num_instances_quota\n allowed=total_alloweds)\n', 'TooManyInstances: Quota exceeded for instances: Requested 1, but already used 10 of 10 instances\n']: InstanceNotFound: Instance 888925b0-164a-4d4a-bb6c-c0426f904e95 could not be found.
Sep 13 17:58:26 devstack-queens nova-conductor[3129]: ERROR oslo_messaging.rpc.server [None req-90a115b2-5838-4be2-afe2-a3b755015e19 demo demo] Exception during message handling: InstanceNotFound: Instance 888925b0-164a-4d4a-bb6c-c0426f904e95 could not be found.
Sep 13 17:58:26 devstack-queens nova-conductor[3129]: ERROR oslo_messaging.rpc.server InstanceNotFound: Instance 888925b0-164a-4d4a-bb6c-c0426f904e95 could not be found.

Because we don't target the cell when updating the instance.

https://github.com/openstack/nova/blob/cfdec41eeec5fab220702efefdaafc45559aeb14/nova/conductor/manager.py#L1168
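Nova ships a real helper for this (nova.context.target_cell); the toy
stand-in below only illustrates the shape of the fix. The class, attribute
names, and connection strings are all hypothetical:

```python
from contextlib import contextmanager

class RequestContext:
    """Toy stand-in for nova's RequestContext (illustrative only)."""
    def __init__(self):
        self.db_connection = "api_database"

@contextmanager
def target_cell(ctxt, cell_db_connection):
    """Point the context at the instance's cell database before the
    instance update, then restore the previous target afterwards."""
    saved = ctxt.db_connection
    ctxt.db_connection = cell_db_connection
    try:
        yield ctxt
    finally:
        ctxt.db_connection = saved

ctxt = RequestContext()
with target_cell(ctxt, "cell1_database") as cctxt:
    # Inside the block, instance lookups hit the right cell DB,
    # so the InstanceNotFound above would not occur.
    assert cctxt.db_connection == "cell1_database"
assert ctxt.db_connection == "api_database"
```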

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: nova/pike
 Importance: Undecided
 Status: New


** Tags: quotas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1717000

Title:
  InstanceNotFound prevents putting over-quota instance into ERROR state

Status in OpenStack Compute (nova):
  New
Status in OpenStack Compute (nova) pike series:
  New

Bug description:
  I found this when trying to recreate bug 1716706.

  https://bugs.launchpad.net/nova/+bug/1716706/comments/4

  Basically I can get conductor to fail the quota recheck and go to set
  the instance into ERROR state but it fails to find the instance since
  we don't have the cell context targeted:

  Sep 13 17:58:26 devstack-queens nova-conductor[3129]: WARNING 
nova.scheduler.utils [None req-90a115b2-5838-4be2-afe2-a3b755015e19 demo demo] 
[instance: 888925b0-164a-4d4a-bb6c-c0426f904e95] Setting instance to ERROR 
state.: TooManyInstances: Quota exceeded for instances: Requested 1, but 
already used 10 of 10 instances
  Sep 13 17:58:26 devstack-queens nova-conductor[3129]: ERROR root [None 
req-90a115b2-5838-4be2-afe2-a3b755015e19 demo demo] Original exception being 
dropped: ['Traceback (most recent call last):\n', ' File 
"/opt/stack/nova/nova/conductor/manager.py", line 1003, in 
schedule_and_build_instances\n orig_num_req=len(build_requests))\n', ' File 
"/opt/stack/nova/nova/compute/utils.py", line 764, in 
check_num_instances_quota\n allowed=total_alloweds)\n', 'TooManyInstances: 
Quota exceeded for instances: Requested 1, but already used 10 of 10 
instances\n']: InstanceNotFound: Instance 888925b0-164a-4d4a-bb6c-c0426f904e95 
could not be found.
  Sep 13 17:58:26 devstack-queens nova-conductor[3129]: ERROR 
oslo_messaging.rpc.server [None req-90a115b2-5838-4be2-afe2-a3b755015e19 demo 
demo] Exception during message handling: InstanceNotFound: Instance 
888925b0-164a-4d4a-bb6c-c0426f904e95 could not be found.
  Sep 13 17:58:26 devstack-queens nova-conductor[3129]: ERROR 
oslo_messaging.rpc.server InstanceNotFound: Instance 
888925b0-164a-4d4a-bb6c-c0426f904e95 could not be found.

  Because we don't target the cell when updating the instance.

  
https://github.com/openstack/nova/blob/cfdec41eeec5fab220702efefdaafc45559aeb14/nova/conductor/manager.py#L1168

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1717000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715995] Re: Flavors in nova - wrong notes

2017-09-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/502112
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=a9f11002cc1b668b612a4ac7ce3d9662efbd2fd5
Submitter: Jenkins
Branch:master

commit a9f11002cc1b668b612a4ac7ce3d9662efbd2fd5
Author: Matt Riedemann 
Date:   Fri Sep 8 11:09:40 2017 -0400

doc: fix flavor notes

This fixes the two points in the note at the top of the
flavors page:

1. The policy rule in the first bullet was old so it's updated.

2. As of Ifa4e9cdfbbac1a1d4bf28679b24a17b13f637ddd in Pike,
   Horizon no longer allows you to 'edit' a flavor by default.
   Rather than try to explain historical bad behavior and the
   new default behavior in the Dashboard, the second bullet is
   simply removed.

Closes-Bug: #1715995

Change-Id: I372bf1e159d1db32461f843bd94c453d2e7df8d2


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715995

Title:
  Flavors in nova - wrong notes

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  Triaged

Bug description:
  - [x] This doc is inaccurate in this way:

  The bullet points in the note at the top of the page are wrong:

  * Configuration rights can be delegated to additional users by
  redefining the access controls for compute_extension:flavormanage in
  /etc/nova/policy.json on the nova-api server.

  * The Dashboard simulates the ability to modify a flavor by deleting
  an existing flavor and creating a new one with the same name.

  The policy rule there is old for the legacy v2 API and the code for
  that is gone, and the part about the Dashboard changed in Pike:
  https://review.openstack.org/#/c/491442/

  
  ---
  Release: 16.0.0.0rc2.dev298 on 2017-09-08 02:24
  SHA: fbe6f77bc1cb41f5d6cfc24ece54d3413f997aab
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/flavors.rst
  URL: https://docs.openstack.org/nova/latest/admin/flavors.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715995/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1713180] Re: need documentation for configuring image import

2017-09-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/498138
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=d304d2c05c521e3263f675794a11800630358ebf
Submitter: Jenkins
Branch:master

commit d304d2c05c521e3263f675794a11800630358ebf
Author: Brian Rosmaita 
Date:   Fri Aug 25 22:42:53 2017 -0400

Add image import docs to admin guide

Change-Id: I0222b5d6d5685029eadf04a3d19160ba018fe2e5
Closes-bug: #1713180


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1713180

Title:
  need documentation for configuring image import

Status in Glance:
  Fix Released

Bug description:
  Need to add a brief section to the Admin guide on configuring the
  interoperable image import introduced in Pike.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1713180/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716963] [NEW] ovs dependencies fail to install on devstack plugin with Fedora

2017-09-13 Thread Daniel Mellado
Public bug reported:

Devstack fails when trying to install ovs dependencies with F26. It
tries to install a non-existent kernel-foo package, as the is_fedora flag
on devstack doesn't differentiate between Fedora/CentOS/..., and currently
this was failing with Fedora 26 due to a '-' vs '_' mismatch in the
package name.

** Affects: neutron
 Importance: Undecided
 Assignee: Daniel Mellado (daniel-mellado)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Daniel Mellado (daniel-mellado)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1716963

Title:
  ovs dependencies fail to install on devstack plugin with Fedora

Status in neutron:
  In Progress

Bug description:
  Devstack fails when trying to install ovs dependencies with F26. It
  tries to install a non-existent kernel-foo package, as the is_fedora flag
  on devstack doesn't differentiate between Fedora/CentOS/..., and currently
  this was failing with Fedora 26 due to a '-' vs '_' mismatch in the
  package name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1716963/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716896] Re: Shelve offload instances are still counted toward cpu and ram usage summary

2017-09-13 Thread Sylvain Bauza
You can find some documentation explaining how shelve operation work and
the difference between offloaded instances and just shelved instances in
https://developer.openstack.org/api-guide/compute/server_concepts.html
#server-actions


** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1716896

Title:
  Shelve offload instances are still counted toward cpu and ram usage
  summary

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Hello
  My customer wants to free up cloud resources as they are not used all the
time, so I told him to shelve the instances so that the resources would no
longer be counted. Unfortunately, I see that resource usage is still counted
on the Horizon project overview and via the CLI (openstack usage show). I
think shelved instances shouldn't be counted, as they are not using any
compute resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1716896/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716945] [NEW] Install and configure (Red Hat) in glance: missing DB steps

2017-09-13 Thread Michael Burk
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [x] This doc is inaccurate in this way: Under Prerequisites, the database 
setup shows to connect to the db, and then it skips to the CLI to create the 
glance user. Compare to Ocata doc:
https://docs.openstack.org/ocata/install-guide-rdo/glance-install.html#install-and-configure-components
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 15.0.0.0rc2.dev25 on 'Wed Aug 23 03:33:04 2017, commit 9820166'
SHA: 982016670fe908e5d7026714b115e63b7c31b46b
Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-rdo.rst
URL: https://docs.openstack.org/glance/pike/install/install-rdo.html

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1716945

Title:
  Install and configure (Red Hat) in glance: missing DB steps

Status in Glance:
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: Under Prerequisites, the database 
setup shows to connect to the db, and then it skips to the CLI to create the 
glance user. Compare to Ocata doc:
  
https://docs.openstack.org/ocata/install-guide-rdo/glance-install.html#install-and-configure-components
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 15.0.0.0rc2.dev25 on 'Wed Aug 23 03:33:04 2017, commit 9820166'
  SHA: 982016670fe908e5d7026714b115e63b7c31b46b
  Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-rdo.rst
  URL: https://docs.openstack.org/glance/pike/install/install-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1716945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716937] [NEW] Reopen: Boot Source in Launch Instance panel is nondeterministic

2017-09-13 Thread Mateusz Kowalski
Public bug reported:

This is to reopen #1640493
(https://bugs.launchpad.net/horizon/+bug/1640493)

---
When using new Launch Instance panel, default Boot Source should be always set 
to "Image" as a default value. Currently it's nondeterministic and selects 
either "Volume Snapshot" or "Image" on a random basis.
---

Patch for the original bug makes **the order** of fields deterministic,
but not the initial choice.

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  This is to reopen #1640493
+ (https://bugs.launchpad.net/horizon/+bug/1640493)
  
  ---
  When using new Launch Instance panel, default Boot Source should be always 
set to "Image" as a default value. Currently it's nondeterministic and selects 
either "Volume Snapshot" or "Image" on a random basis.
  ---
  
  Patch for the original bug makes **the order** of fields deterministic,
  but not the initial choice.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1716937

Title:
  Reopen: Boot Source in Launch Instance panel is nondeterministic

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This is to reopen #1640493
  (https://bugs.launchpad.net/horizon/+bug/1640493)

  ---
  When using new Launch Instance panel, default Boot Source should be always 
set to "Image" as a default value. Currently it's nondeterministic and selects 
either "Volume Snapshot" or "Image" on a random basis.
  ---

  Patch for the original bug makes **the order** of fields
  deterministic, but not the initial choice.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1716937/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716920] [NEW] online snapshot deletion breaks volume info and backing chain (with remotefs drivers?)

2017-09-13 Thread Silvan Kaiser
Public bug reported:

The deletion of online snapshots of remotefs based volumes breaks the
.info file/backing chain of these volumes. Logs can be seen in any
current Quobyte CI run in Cinder/Nova/OS-Brick. Afaics the other
driver using this (VZstorage) has its CI skip the affected tests (e.g.
test_snapshot_create_delete_with_volume_in_use).

I ran a lot of tests and so far i can say that the first deletion of a
member in the backing chain works (snapshot is deleted) but seemingly
leaves the .info files content and/or the backing chain of the volume
file in a broken state. The error can be identified e.g. by the
following log pattern:

This is the first snapshot deletion, which runs successfully (the
snapshot's id is 91755e5f-e573-4ddb-84af-3712d69afc89). The ID of the
snapshot and its snapshot_file name match:

2017-09-13 08:28:59.436 20467 DEBUG cinder.volume.drivers.remotefs 
[req-eda7ddf5-217d-490d-a8d4-1813df68d8db 
tempest-VolumesSnapshotTestJSON-708947401 -] Deleting online snapshot 
91755e5f-e573-4ddb-84af-3712d69a
fc89 of volume 94598844-418c-4b5d-b034-5330e24e7421 _delete_snapshot 
/opt/stack/cinder/cinder/volume/drivers/remotefs.py:1099
2017-09-13 08:28:59.487 20467 DEBUG cinder.volume.drivers.remotefs 
[req-eda7ddf5-217d-490d-a8d4-1813df68d8db 
tempest-VolumesSnapshotTestJSON-708947401 -] snapshot_file for this snap is: 
volume-94598844-418c-4b5d-b034-5330e24e7421.91755e5f-e573-4ddb-84af-3712d69afc89
 _delete_snapshot /opt/stack/cinder/cinder/volume/drivers/remotefs.py:1124

The next snapshot to be deleted (138a1f62-7582-4aaa-9d72-9eada34beeaf) shows
that a wrong snapshot_file is read from the volume's .info file. In fact
it shows the file of the previous snapshot:

2017-09-13 08:29:01.857 20467 DEBUG cinder.volume.drivers.remotefs 
[req-6ad4add9-34b8-41b9-a1f0-7dc2d6bb1862 
tempest-VolumesSnapshotTestJSON-708947401 -] Deleting online snapshot 
138a1f62-7582-4aaa-9d72-9eada34b
eeaf of volume 94598844-418c-4b5d-b034-5330e24e7421 _delete_snapshot 
/opt/stack/cinder/cinder/volume/drivers/remotefs.py:1099
2017-09-13 08:29:01.872 20467 DEBUG cinder.volume.drivers.remotefs 
[req-6ad4add9-34b8-41b9-a1f0-7dc2d6bb1862 
tempest-VolumesSnapshotTestJSON-708947401 -] snapshot_file for this snap is: 
volume-94598844-418c-4b5d-b034-5330e24e7421.91755e5f-e573-4ddb-84af-3712d69afc89
 _delete_snapshot /opt/stack/cinder/cinder/volume/drivers/remotefs.py:1124

Now this second snapshot deletion fails because the snapshot file for
138a1f62-7582-4aaa-9d72-9eada34beeaf no longer exists:

2017-09-13 08:29:02.674 20467 ERROR oslo_messaging.rpc.server 
ProcessExecutionError: Unexpected error while running command.
2017-09-13 08:29:02.674 20467 ERROR oslo_messaging.rpc.server Command: 
/usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=8 -- env 
LC_ALL=C qemu-img info 
/opt/stack/data/cinder/mnt/a1e3635ffba9fce1b854369f1a255d7b/volume-94598844-418c-4b5d-b034-5330e24e7421.138a1f62-7582-4aaa-9d72-9eada34beeaf
2017-09-13 08:29:02.674 20467 ERROR oslo_messaging.rpc.server Exit code: 1
2017-09-13 08:29:02.674 20467 ERROR oslo_messaging.rpc.server Stdout: u''
2017-09-13 08:29:02.674 20467 ERROR oslo_messaging.rpc.server Stderr: 
u"qemu-img: Could not open 
'/opt/stack/data/cinder/mnt/a1e3635ffba9fce1b854369f1a255d7b/volume-94598844-418c-4b5d-b034-5330e24e7421.138a1f62-7582-4aaa-9d72-9eada34beeaf':
 Could not open 
'/opt/stack/data/cinder/mnt/a1e3635ffba9fce1b854369f1a255d7b/volume-94598844-418c-4b5d-b034-5330e24e7421.138a1f62-7582-4aaa-9d72-9eada34beeaf':
 No such file or directory\n"

The referenced tempest test fails 100% of the time in our CIs. I
manually tested the scenario and found the same results. Furthermore i
was able, by creating three consecutive snapshots from a single volume
and deleting them one after the other, to create a snapshot file with a
broken backing file link. In the end i was left with a volume file and
an overlay file referencing a removed backing file (previous snapshot of
the same volume).

I was able to run the scenario without issues when using offline
snapshots. Thus this seems to be related to the usage of the online
snapshot deletion via the Nova API.
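For illustration, here is a tiny self-contained model (not Cinder code; file names below are hypothetical stand-ins) of how a stale .info mapping reproduces the "No such file or directory" failure above:

```python
import os
import tempfile

workdir = tempfile.mkdtemp()

def overlay(name):
    # Create an empty stand-in for a qcow2 overlay file.
    path = os.path.join(workdir, name)
    open(path, "w").close()
    return path

snap_a = overlay("vol.91755e5f")
snap_b = overlay("vol.138a1f62")

# Correct bookkeeping would be {"91755e5f": snap_a, "138a1f62": snap_b};
# the broken state observed in the logs looks like this instead:
info = {"91755e5f": snap_a, "138a1f62": snap_a}  # stale second entry

os.remove(info["91755e5f"])                # first deletion succeeds

exists = os.path.exists(info["138a1f62"])  # second lookup hits the removed file
# qemu-img info on that path would now fail, as in the log above.
```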

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: quobyte remotefs snapshot

** Tags added: quobyte remotefs

** Also affects: nova
   Importance: Undecided
   Status: New

** Description changed:

  The deletion of online snapshots of remotefs based volumes breaks the
  .info file/backing chain of these volumes. Logs can be seen in any
  current Quobyte CI run in Cinder/Nova/OS-Brick. Afaics the other
  driver using this (VZstorage) has its CI skip the affected tests (e.g.
  test_snapshot_create_delete_with_volume_in_use).
  
  I ran a lot of tests and so far i can say that the first deletion of a
  member in the backing chain works (snapshot is deleted) but seemingly
  leaves the .info files content and/or the backing chain of the volume
  file in a broken state.

[Yahoo-eng-team] [Bug 1716913] [NEW] bandwidth metering - Creating meter label rule doesn't match the metering concept.

2017-09-13 Thread leegayeon
Public bug reported:

In the following bug report, "remote_ip_prefix" is treated as the "source
address/cidr" for ingress traffic, but this does not match the metering
concept.
https://bugs.launchpad.net/neutron/+bug/1528137


┌────────┐                 ┌────────┐               ┌────────┐
│external│─────────────────│router02│───────────────│  VMs   │
└────────┘ 100.100.20.0/24 └───┬────┘  10.0.1.0/24  └────────┘
                               │
                               │                    ┌────────┐
                               └────────────────────│  VMs   │
                                    20.0.1.0/24     └────────┘


In case of ingress (inbound) traffic, the source should be 0.0.0.0/0 and the
destination should be the address/cidr of the VMs. That way, it is possible
to meter bandwidth per VM address/cidr.


This is my test case.

1. Create Label
# neutron meter-label-create --tenant-id $TEANAT_ID --description "leegy" 
meter_ingress
Created a new metering_label:
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| description | leegy                                |
| id          | b1c41f6f-3504-441d-aaa6-d655ca76bc08 |
| name        | meter_ingress                        |
| project_id  | e8c282b3d5e94776a655314e7ab86985     |
| shared      | False                                |
| tenant_id   | e8c282b3d5e94776a655314e7ab86985     |
+-------------+--------------------------------------+

2. Create rule 
An ingress rule (traffic from the qg- interface to the qr- interface);
remote_ip_prefix is the network cidr of the VMs.

# neutron meter-label-rule-create --tenant-id $TENANT_ID --direction ingress 
$LABEL_ID 10.0.1.0/24
Created a new metering_label_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | ingress                              |
| excluded          | False                                |
| id                | f9829983-fe3b-4848-8983-e3667dfe64df |
| metering_label_id | b1c41f6f-3504-441d-aaa6-d655ca76bc08 |
| remote_ip_prefix  | 10.0.1.0/24                          |
+-------------------+--------------------------------------+

3. Check iptables rules
I want to meter bandwidth from external to VMs.

[expected rules]
Chain neutron-meter-r-b1c41f6f-350 (1 references)
 pkts bytes target                        prot opt in              out  source      destination
    0     0 neutron-meter-l-b1c41f6f-350  all  --  qg-3f62cc89-83  *    0.0.0.0/0   10.0.1.0/24

[but result is...]
Chain neutron-meter-r-b1c41f6f-350 (1 references)
 pkts bytes target                        prot opt in              out  source       destination
    0     0 neutron-meter-l-b1c41f6f-350  all  --  qg-3f62cc89-83  *    10.0.1.0/24  0.0.0.0/0


4. Modify neutron source 
neutron/services/metering/drivers/iptables/iptables_driver.py 

def _prepare_rule(self, ext_dev, rule, label_chain):
    remote_ip = rule['remote_ip_prefix']
    if rule['direction'] == 'egress':
        #dir_opt = '-d %s -o %s' % (remote_ip, ext_dev)
        dir_opt = '-s %s -o %s' % (remote_ip, ext_dev)
    else:
        #dir_opt = '-s %s -i %s' % (remote_ip, ext_dev)
        dir_opt = '-d %s -i %s' % (remote_ip, ext_dev)

    if rule['excluded']:
        ipt_rule = '%s -j RETURN' % dir_opt
    else:
        ipt_rule = '%s -j %s' % (dir_opt, label_chain)
    return ipt_rule
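For a quick sanity check, here is a standalone re-implementation of the modified rule builder above (illustrative only, not the neutron module itself), showing the iptables fragment it emits for the ingress rule from step 2:

```python
# Standalone sketch of the modified _prepare_rule logic, without the
# driver class or neutron imports.
def prepare_rule(ext_dev, rule, label_chain):
    remote_ip = rule['remote_ip_prefix']
    if rule['direction'] == 'egress':
        dir_opt = '-s %s -o %s' % (remote_ip, ext_dev)
    else:
        # ingress: match the VM cidr as the *destination*, per the report
        dir_opt = '-d %s -i %s' % (remote_ip, ext_dev)
    if rule['excluded']:
        return '%s -j RETURN' % dir_opt
    return '%s -j %s' % (dir_opt, label_chain)

rule = {'remote_ip_prefix': '10.0.1.0/24',
        'direction': 'ingress',
        'excluded': False}
fragment = prepare_rule('qg-3f62cc89-83', rule, 'neutron-meter-l-b1c41f6f-350')
# fragment == '-d 10.0.1.0/24 -i qg-3f62cc89-83 -j neutron-meter-l-b1c41f6f-350'
```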
 

5. Check iptables rules 
possble to meter the bandwidth from external to VMs.

Chain neutron-meter-r-b1c41f6f-350 (1 references)
 pkts bytes targetprot optin out   
source destination
0 0 neutron-meter-l-b1c41f6f-350  all  -- qg-3f62cc89-83  *  
0.0.0.0/010.0.1.0/24


6. ping test
ping from qdhcp-namespace of VM network to another router gateway ip

# neutron net-list
+--+---++
| id   | name  | subnets
|
+--+---++
| 19bd6565-07a1-4df3--420cb5d01e0a | network02 | 
c00c950e-e4ac-4d79-915c-535114a4e401   |
|  |   | 10.0.1.0/24
|
| dca679c6-e294-49ef-addd-30fd6d6d0c53 | public2   | 
47458829-cc7b-498d-8dd6-2a97c797cc61   |
|  |   | 100.100.20.0/24
|
+--+---++

# neutron router-list
++++-+---+
| id | name   | external_gateway_info  | 
distri

[Yahoo-eng-team] [Bug 1716903] [NEW] Failed to live-migrate instance in cell1.

2017-09-13 Thread Yikun Jiang
Public bug reported:


Step 1 create instance in cell1
+--------------------------------------+--------+-----------+------------+-------------+---------------------------------+
| ID                                   | Name   | Status    | Task State | Power State | Networks                        |
+--------------------------------------+--------+-----------+------------+-------------+---------------------------------+
| 84038890-8d70-45e1-8240-2303f4227e11 | yikun1 | MIGRATING | migrating  | Running     | public=2001:db8::a, 172.24.4.13 |
+--------------------------------------+--------+-----------+------------+-------------+---------------------------------+

Step 2 live migrate instance
nova live-migration 84038890-8d70-45e1-8240-2303f4227e11

Step 3
The instance will be stuck in the "MIGRATING" state.
+--------------------------------------+--------+-----------+------------+-------------+---------------------------------+
| ID                                   | Name   | Status    | Task State | Power State | Networks                        |
+--------------------------------------+--------+-----------+------------+-------------+---------------------------------+
| 84038890-8d70-45e1-8240-2303f4227e11 | yikun1 | MIGRATING | migrating  | Running     | public=2001:db8::a, 172.24.4.13 |
+--------------------------------------+--------+-----------+------------+-------------+---------------------------------+

It seems we need to add the @targets_cell decorator to the
**live_migrate_instance** method in the conductor:
https://github.com/openstack/nova/blob/master/nova/conductor/manager.py#L378
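A hypothetical sketch of what such a decorator does; CELL_MAPPINGS and the dict-based context below are illustrative stand-ins, not nova's real internals:

```python
import functools

# Maps instance uuid -> cell (stand-in for nova's InstanceMapping table).
CELL_MAPPINGS = {"84038890": "cell1"}

def targets_cell(fn):
    """Resolve the instance's cell and target the context before the
    wrapped conductor method runs, so DB writes (e.g. instance action
    records) land in the right cell database."""
    @functools.wraps(fn)
    def wrapper(self, context, instance, *args, **kwargs):
        context["cell"] = CELL_MAPPINGS[instance["uuid"]]
        try:
            return fn(self, context, instance, *args, **kwargs)
        finally:
            context["cell"] = None
    return wrapper

class ConductorManager:
    @targets_cell
    def live_migrate_instance(self, context, instance):
        # With the context targeted, lookups resolve in the right cell.
        return "migrating %s in %s" % (instance["uuid"], context["cell"])

mgr = ConductorManager()
result = mgr.live_migrate_instance({"cell": None}, {"uuid": "84038890"})
# result == 'migrating 84038890 in cell1'
```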


ERROR LOG:
Exception during message handling: InstanceActionNotFound: Action for 
request_id req-5aa03558-ae14-458e-9c35-c3d377c7ce45 on instance 
84038890-8d70-45e1-8240-2303f4227e11 not found
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", 
line 160, in _process_incoming
res = self.dispatcher.dispatch(message)
  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
213, in dispatch
return self._do_dispatch(endpoint, method, ctxt, args)
  File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
183, in _do_dispatch
result = func(ctxt, **new_args)
  File "/opt/stack/nova/nova/compute/utils.py", line 875, in decorated_function
with EventReporter(context, event_name, instance_uuid):
  File "/opt/stack/nova/nova/compute/utils.py", line 846, in __enter__
self.context, uuid, self.event_name, want_result=False)
  File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", 
line 184, in wrapper
result = fn(cls, context, *args, **kwargs)
  File "/opt/stack/nova/nova/objects/instance_action.py", line 169, in 
event_start
db_event = db.action_event_start(context, values)
  File "/opt/stack/nova/nova/db/api.py", line 1957, in action_event_start
return IMPL.action_event_start(context, values)
  File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 250, in wrapped
return f(context, *args, **kwargs)
  File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 6155, in 
action_event_start
instance_uuid=values['instance_uuid'])
InstanceActionNotFound: Action for request_id 
req-5aa03558-ae14-458e-9c35-c3d377c7ce45 on instance 
84038890-8d70-45e1-8240-2303f4227e11 not found

** Affects: nova
 Importance: Undecided
 Assignee: Yikun Jiang (yikunkero)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1716903

Title:
  Failed to live-migrate instance in cell1.

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  
  Step 1 create instance in cell1
  
  +--------------------------------------+--------+-----------+------------+-------------+---------------------------------+
  | ID                                   | Name   | Status    | Task State | Power State | Networks                        |
  +--------------------------------------+--------+-----------+------------+-------------+---------------------------------+
  | 84038890-8d70-45e1-8240-2303f4227e11 | yikun1 | MIGRATING | migrating  | Running     | public=2001:db8::a, 172.24.4.13 |
  +--------------------------------------+--------+-----------+------------+-------------+---------------------------------+

  Step 2 live migrate instance
  nova live-migration 84038890-8d70-45e1-8240-2303f4227e11

  Step 3
  The instance will be stuck in the "MIGRATING" state.
  
+--++---++-+-+
  | ID   | Name   | Status| Task State | 
Power State | Networks|
  
+--++---++-+-+
  | 84038890-

[Yahoo-eng-team] [Bug 1716899] [NEW] Install and configure in keystone

2017-09-13 Thread Adrian Gherasim
Public bug reported:

The Next link at the bottom of the page at
"https://docs.openstack.org/keystone/pike/install/keystone-install-
ubuntu.html#finalize-the-installation" points to "Verify operation" and
not to "Create a domain, projects, users, and roles".



This bug tracker is for errors with the documentation, use the following as a 
template and remove or add fields as you see fit. Convert [ ] into [x] to check 
boxes:

- [ ] This doc is inaccurate in this way: __
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 12.0.0.0rc3.dev2 on 2017-08-26 22:01
SHA: 5a9aeefff06678d790d167b6dac752677f02edf9
Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-install-ubuntu.rst
URL: 
https://docs.openstack.org/keystone/pike/install/keystone-install-ubuntu.html

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1716899

Title:
  Install and configure in keystone

Status in OpenStack Identity (keystone):
  New

Bug description:
  The Next link at the bottom of the page at
  "https://docs.openstack.org/keystone/pike/install/keystone-install-
  ubuntu.html#finalize-the-installation" points to "Verify operation" and
  not to "Create a domain, projects, users, and roles".



  
  This bug tracker is for errors with the documentation, use the following as a 
template and remove or add fields as you see fit. Convert [ ] into [x] to check 
boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 12.0.0.0rc3.dev2 on 2017-08-26 22:01
  SHA: 5a9aeefff06678d790d167b6dac752677f02edf9
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-install-ubuntu.rst
  URL: 
https://docs.openstack.org/keystone/pike/install/keystone-install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1716899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716896] [NEW] Shelve offload instances are still counted toward cpu and ram usage summary

2017-09-13 Thread Jacolex
Public bug reported:

Hello
My customer wants to free up cloud resources as they are not used all the
time, so I told him to shelve the instances so that the resources would no
longer be counted. Unfortunately, I see that resource usage is still counted
on the Horizon project overview and via the CLI (openstack usage show). I
think shelved instances shouldn't be counted, as they are not using any
compute resources.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1716896

Title:
  Shelve offload instances are still counted toward cpu and ram usage
  summary

Status in OpenStack Compute (nova):
  New

Bug description:
  Hello
  My customer wants to free up cloud resources as they are not used all the
time, so I told him to shelve the instances so that the resources would no
longer be counted. Unfortunately, I see that resource usage is still counted
on the Horizon project overview and via the CLI (openstack usage show). I
think shelved instances shouldn't be counted, as they are not using any
compute resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1716896/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716879] [NEW] Network graph icons misalignment in IE11 and MSEdge

2017-09-13 Thread Kyrylo Romanenko
Public bug reported:

Steps:
1. Login to Horizon
2. Navigate to Project -> Network -> Network Topology -> Graph

Expected: icons should be centered in graph circles

Actual: icons pop up outside the graph circles

Environment: Ocata

Browsers: IE11 under Windows 7 and Windows Server 2008R2. 
MS Edge under Windows 10.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ie

** Attachment added: "Selection_700.png"
   
https://bugs.launchpad.net/bugs/1716879/+attachment/4949297/+files/Selection_700.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1716879

Title:
  Network graph icons misalignment in IE11 and MSEdge

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps:
  1. Login to Horizon
  2. Navigate to Project -> Network -> Network Topology -> Graph

  Expected: icons should be centered in graph circles

  Actual: icons pop up outside the graph circles

  Environment: Ocata

  Browsers: IE11 under Windows 7 and Windows Server 2008R2. 
  MS Edge under Windows 10.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1716879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716868] [NEW] config file is not read

2017-09-13 Thread do3meli
Public bug reported:

We are running Horizon on an Ubuntu 16 cluster with the official repos
for the new Pike release and found that the configuration file
/etc/openstack-dashboard/local_settings.py is no longer read.

It seems the symlink /usr/share/openstack-
dashboard/openstack_dashboard/local/local_settings.py is no longer
working. We removed that symlink and copied the file directly into
/usr/share/openstack-dashboard/openstack_dashboard/local/, which seems
to help for the moment.

installed package: python-django-horizon - 3:12.0.0-0ubuntu1~cloud0 
OS: Ubuntu 16.04.3 LTS

apache virtual host config: http://paste.openstack.org/show/621008/

file permissions:
-rw-r--r-- 1 root horizon 34573 Sep 13 10:01 
/etc/openstack-dashboard/local_settings.py

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1716868

Title:
  config file is not read

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  we are running horizon on an Ubuntu 16 cluster with the official repos
  for the new pike release and figured out that the configuration file
  /etc/openstack-dashboard/local_settings.py is no longer read.

  it seems the symlink /usr/share/openstack-
  dashboard/openstack_dashboard/local/local_settings.py is no longer
  working. we removed that symlink and copied the file directly into
  /usr/share/openstack-dashboard/openstack_dashboard/local/ which seems
  to help for the moment.

  installed package: python-django-horizon - 3:12.0.0-0ubuntu1~cloud0 
  OS: Ubuntu 16.04.3 LTS

  apache virtual host config: http://paste.openstack.org/show/621008/

  file permissions:
  -rw-r--r-- 1 root horizon 34573 Sep 13 10:01 
/etc/openstack-dashboard/local_settings.py
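
  The workaround described above can be sketched as a small POSIX shell
  function. The paths in the comment are the ones from this report; the
  function itself takes them as parameters, so this is a generic sketch
  of the manual fix, not the packaged solution:

```shell
#!/bin/sh
# Workaround sketch for the dangling local_settings.py symlink.
# In this report:
#   src  = /etc/openstack-dashboard/local_settings.py
#   dest = /usr/share/openstack-dashboard/openstack_dashboard/local/local_settings.py
# Replaces a symlink (working or dangling) with a real copy of src.
replace_symlink_with_copy() {
    src="$1"
    dest="$2"
    if [ -L "$dest" ]; then
        rm "$dest"          # remove the symlink itself, not its target
        cp "$src" "$dest"   # install a plain file in its place
    fi
}
```

  On a real deployment these commands would need root, and the copied
  file no longer tracks later changes made under /etc, which is why this
  only helps "for the moment" as noted above.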

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1716868/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716746] Re: functional job broken by new os-testr

2017-09-13 Thread Bernard Cafarelli
** Also affects: networking-sfc
   Importance: Undecided
   Status: New

** Changed in: networking-sfc
 Assignee: (unassigned) => Bernard Cafarelli (bcafarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1716746

Title:
  functional job broken by new os-testr

Status in networking-sfc:
  New
Status in neutron:
  Fix Released

Bug description:
  functional job fails with:

  2017-09-12 16:09:20.705975 | 2017-09-12 16:09:20.705 | + 
/opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L67:   
testr_exit_code=0
  2017-09-12 16:09:20.707372 | 2017-09-12 16:09:20.706 | + 
/opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L68:   set 
-e
  2017-09-12 16:09:20.718005 | 2017-09-12 16:09:20.717 | + 
/opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:main:L71:   
generate_testr_results
  2017-09-12 16:09:20.719619 | 2017-09-12 16:09:20.719 | + 
/opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:generate_testr_results:L12:
   sudo -H -u stack chmod o+rw .
  2017-09-12 16:09:20.720974 | 2017-09-12 16:09:20.720 | + 
/opt/stack/new/neutron/neutron/tests/contrib/post_test_hook.sh:generate_testr_results:L13:
   sudo -H -u stack chmod o+rw -R .testrepository
  2017-09-12 16:09:20.722284 | 2017-09-12 16:09:20.721 | chmod: cannot access 
'.testrepository': No such file or directory

  This is because the new os-testr release switched to stestr, which
  uses a different directory name (.stestr).
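
  A hook script that needs to work with both old and new os-testr could
  probe for either results directory before running chmod. A minimal
  sketch (the helper name is mine, not from post_test_hook.sh):

```shell
#!/bin/sh
# Print whichever test-results directory the installed runner created:
# stestr writes .stestr, the older testrepository wrote .testrepository.
find_results_dir() {
    if [ -d .stestr ]; then
        echo .stestr
    elif [ -d .testrepository ]; then
        echo .testrepository
    else
        return 1   # neither runner has produced results yet
    fi
}
```

  The failing line could then use `chmod o+rw -R "$(find_results_dir)"`
  instead of hard-coding .testrepository.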

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-sfc/+bug/1716746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp