[Yahoo-eng-team] [Bug 1783556] Re: Neutron ovs agent logs flooded with KeyErrors

2018-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/585742
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=da5b13df2b2a8171f60311414250671820390738
Submitter: Zuul
Branch: master

commit da5b13df2b2a8171f60311414250671820390738
Author: Lucian Petrut 
Date:   Wed Jul 25 16:05:04 2018 +0300

Trivial: avoid KeyError while processing ports

The Neutron OVS agent logs can get flooded with KeyErrors because the
'_get_port_info' method omits the added/removed dict items when no
ports have been added/removed, even though callers expect those keys
to be present, if only as empty sets.

This change ensures that those port info dict fields are always set.

Closes-Bug: #1783556

Change-Id: I9e5325aa2d8525231353ba451e8ea895be51b1ca


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1783556

Title:
  Neutron ovs agent logs flooded with KeyErrors

Status in neutron:
  Fix Released

Bug description:
  The Neutron OVS agent logs can get flooded with KeyErrors because the
  '_get_port_info' method omits the added/removed dict items when no
  ports have been added/removed, even though callers expect those keys
  to be present, if only as empty sets.

  Trace: http://paste.openstack.org/raw/726614/
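
  The pattern behind the fix can be sketched as follows. This is a
  simplified, hypothetical version for illustration only; the real
  '_get_port_info' lives in the OVS agent and is more involved:

```python
def get_port_info(registered_ports, current_ports):
    """Build a port-info dict whose 'added'/'removed' keys always exist.

    Simplified, hypothetical stand-in for the agent's _get_port_info:
    setting the keys unconditionally (to empty sets when nothing
    changed) avoids KeyError in callers that read them directly.
    """
    return {
        'current': set(current_ports),
        'added': set(current_ports) - set(registered_ports),
        'removed': set(registered_ports) - set(current_ports),
    }

info = get_port_info({'p1', 'p2'}, {'p1', 'p2'})
# No ports changed, but the keys are still present (as empty sets):
assert info['added'] == set() and info['removed'] == set()
```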

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1783556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1786347] Re: Incorrect entry point of metering iptables driver

2018-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/590479
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=adf38349c45c8871a8858d3440a2c6fb4d967583
Submitter: Zuul
Branch: master

commit adf38349c45c8871a8858d3440a2c6fb4d967583
Author: Hongbin Lu 
Date:   Thu Aug 9 18:37:52 2018 +

Fix iptables metering driver entrypoint

Closes-Bug: #1786347
Change-Id: If1c276338cec0c199d8cc8d8f6385025a3bb5d25


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1786347

Title:
  Incorrect entry point of metering iptables driver

Status in neutron:
  Fix Released

Bug description:
  At the entry point 'neutron.services.metering_drivers', the alias
  'iptables' is wrong. It was:

iptables =
  neutron.services.metering.iptables.iptables_driver:IptablesMeteringDriver

  But it should be:

iptables =
  neutron.services.metering.drivers.iptables.iptables_driver:IptablesMeteringDriver

  This caused the metering agent to be unable to load the iptables
  driver via its alias.
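
  Entry points of this form are resolved by importing the module part of
  the 'module:Class' value, so a wrong package segment surfaces as an
  import error at load time. A rough sketch of the mechanism (simplified;
  the agent actually loads drivers through the setuptools/stevedore
  machinery, not a helper like this one):

```python
import importlib

def load_driver(spec):
    """Resolve a 'dotted.module.path:ClassName' entry-point value.

    Simplified stand-in for what the entry-point machinery does when
    the metering agent loads its driver by alias.
    """
    module_path, class_name = spec.split(':')
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# A correct spec resolves; a spec with a missing package segment (as in
# this bug) raises ModuleNotFoundError when the agent tries to load it.
import json
assert load_driver('json:JSONDecoder') is json.JSONDecoder
```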

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1786347/+subscriptions



[Yahoo-eng-team] [Bug 1784438] Re: Safer handling inside of OperationLogMiddleware when object is unserializable

2018-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/587185
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=27f619cf9b82d720ec83f72e02a4821606394829
Submitter: Zuul
Branch: master

commit 27f619cf9b82d720ec83f72e02a4821606394829
Author: Eddie Ramirez 
Date:   Mon Jul 30 11:03:02 2018 -0700

Safer handling of return statement inside of OperationLogMiddleware

Enclose the return statement in a try block; if the object is
unserializable, return a message instead.

Change-Id: I184f4b10a419037d3ed770fbec42c262f03a89f2
Closes-bug: #1784438


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1784438

Title:
  Safer handling inside of OperationLogMiddleware when object is
  unserializable

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  function _get_request_param() in
  
https://github.com/openstack/horizon/blob/master/horizon/middleware/operation_log.py#L169-L190
  returns a JSON dump of params, but the params object is not always
  serializable.

  I'm proposing better handling of the return statement when the
  object "params" is unserializable.
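
  The proposed handling can be sketched like this. The function name and
  fallback text are illustrative, not the actual middleware code:

```python
import json

def safe_param_dump(params):
    """Serialize request params for the operation log, falling back to
    a fixed message when the object is not JSON-serializable.

    Hypothetical sketch of the approach; 'Unserializable Object' is an
    illustrative fallback, not what Horizon actually returns.
    """
    try:
        return json.dumps(params)
    except (TypeError, ValueError):
        return 'Unserializable Object'

assert safe_param_dump({'a': 1}) == '{"a": 1}'
assert safe_param_dump({'conn': object()}) == 'Unserializable Object'
```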

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1784438/+subscriptions



[Yahoo-eng-team] [Bug 1786546] [NEW] UnicodeError in build_reno in build-openstack-sphinx-docs

2018-08-10 Thread Akihiro Motoki
Public bug reported:

http://logs.openstack.org/63/590163/2/check/build-openstack-sphinx-
docs/82884d2/job-output.txt.gz#_2018-08-09_09_13_44_743163

http://paste.openstack.org/show/727843/

This does not actually trigger a job failure, but it is better to fix
it.

** Affects: horizon
 Importance: Medium
 Assignee: Akihiro Motoki (amotoki)
 Status: In Progress


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1786546

Title:
  UnicodeError in build_reno in build-openstack-sphinx-docs

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  http://logs.openstack.org/63/590163/2/check/build-openstack-sphinx-
  docs/82884d2/job-output.txt.gz#_2018-08-09_09_13_44_743163

  http://paste.openstack.org/show/727843/

  This does not actually trigger a job failure, but it is better to fix
  it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1786546/+subscriptions



[Yahoo-eng-team] [Bug 1644457] Re: keypair quota error

2018-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/590081
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=fac7d6f2d23afa94983ab3b09d4c101d85a653d0
Submitter: Zuul
Branch: master

commit fac7d6f2d23afa94983ab3b09d4c101d85a653d0
Author: Vishakha Agarwal 
Date:   Thu Aug 9 08:41:36 2018 +0530

Quota details for key_pair "in_use" is 0.

This patch updates the api-ref for the 'in_use'
field of the keypair quota details, which is
always 0 as it is a user-dependent parameter.

Closes-Bug: #1644457
Change-Id: I0323c411126314ddf3d689dc3120b039256ae81a


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1644457

Title:
  keypair quota error

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  The key_pairs value shown by nova quota-show is wrong after adding a
  keypair.

  Steps to reproduce
  ==
  1. nova keypair-add k_001
  2. nova keypair-list
  3. nova quota-show --detail

  Expected result
  ===
  The key_pairs in_use value reflects the newly added keypair.

  Actual result
  =
  After running "nova keypair-add k_001", keypair-list displays k_001,
  but "nova quota-show --detail" does not report a correct keypair
  in_use value.

  Environment
  ===
  1. Devstack with newton version

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1644457/+subscriptions



[Yahoo-eng-team] [Bug 1764282] Re: Multiple times retrieve information of project_id in Delete domain

2018-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/589027
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=44da48f013881e87f8ac099cf7462d00d804ddd3
Submitter: Zuul
Branch: master

commit 44da48f013881e87f8ac099cf7462d00d804ddd3
Author: wangxiyuan 
Date:   Mon Aug 6 15:05:25 2018 +0800

Remove redundant get_project call

This patch removes some redundant "get_project" calls when
deleting projects/domains.

Change-Id: Ife4dd18962077bac30fa1cecf7621cc86a62929c
Closes-bug: #1764282


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1764282

Title:
  Multiple times retrieve information of project_id in  Delete domain

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  In the Delete domain use case, redundant SQL queries are executed,
  which can lead to performance delays: the SELECT query that retrieves
  information for project_id runs multiple times. This must be reduced
  to enhance performance.

  A code change is required to handle the redundant SQL queries; the
  code in keystone/token/backends.sql.py needs to change so that the
  extra queries are removed.
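
  The shape of the fix is to fetch the project record once and pass it
  along, instead of letting each helper re-run the same SELECT. A toy
  sketch under that assumption (all function names are illustrative,
  not keystone's actual code):

```python
CALLS = {'get_project': 0}

def get_project(project_id):
    """Stand-in for the keystone backend lookup (counts invocations)."""
    CALLS['get_project'] += 1
    return {'id': project_id, 'is_domain': True}

def _disable_domain(project):
    project['enabled'] = False

def _delete_children(project):
    project['children'] = []

def delete_domain(domain_id):
    """Run the lookup once and hand the fetched record to the helpers,
    rather than having each helper call get_project() again."""
    project = get_project(domain_id)   # single query
    _disable_domain(project)           # reuses the fetched record
    _delete_children(project)
    return project

delete_domain('d1')
assert CALLS['get_project'] == 1  # one query, not one per helper
```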

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1764282/+subscriptions



[Yahoo-eng-team] [Bug 1786519] [NEW] debugging why NoValidHost with placement challenging

2018-08-10 Thread Chris Dent
Public bug reported:

With the advent of placement, the FilterScheduler no longer provides
granular information about which class of resource (disk, VCPU, RAM) is
not available in sufficient quantities to allow a host to be found.

This is because placement is now making those choices and does not (yet)
break down the results of its queries into easy-to-understand chunks. If
it returns zero results, all you know is "we didn't have enough
resources", with no indication of which ones.

This can be fixed by changing the way in which queries are made so that
there is a series of queries. After each one, a report of how many
results remain can be made.

While this is relatively straightforward to do for the (currently)
common simple non-nested and non-sharing providers situation, it will be
more difficult for the non-simple cases. Therefore, it makes sense to
have different code paths for simple and non-simple allocation candidate
queries. This will also result in performance gains for the common case.

See this email thread for additional discussion and reports of problems
in the wild: http://lists.openstack.org/pipermail/openstack-
dev/2018-August/132735.html
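
A toy model of the proposed approach: filter candidates one resource
class at a time and report how many survive each step, so that a
NoValidHost can say which resource ran out. All names and data here are
illustrative, not placement's actual query code:

```python
def find_hosts(hosts, required):
    """Filter candidate hosts one resource class at a time, logging how
    many candidates remain after each step.

    With a single combined query, an empty result says nothing; with a
    series of per-resource steps, the last non-empty step identifies
    the scarce resource.
    """
    candidates = list(hosts)
    for resource, amount in required.items():
        candidates = [h for h in candidates
                      if h['free'].get(resource, 0) >= amount]
        print('%d candidates remain after filtering on %s'
              % (len(candidates), resource))
    return candidates

hosts = [
    {'name': 'cn1', 'free': {'VCPU': 8, 'MEMORY_MB': 1024, 'DISK_GB': 10}},
    {'name': 'cn2', 'free': {'VCPU': 2, 'MEMORY_MB': 4096, 'DISK_GB': 100}},
]
# With this request, the step-by-step log shows DISK_GB eliminated cn1:
result = find_hosts(hosts, {'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 50})
assert [h['name'] for h in result] == ['cn2']
```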

** Affects: nova
 Importance: High
 Assignee: Jay Pipes (jaypipes)
 Status: Confirmed


** Tags: placement rocky-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1786519

Title:
  debugging why NoValidHost with placement challenging

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  With the advent of placement, the FilterScheduler no longer provides
  granular information about which class of resource (disk, VCPU, RAM)
  is not available in sufficient quantities to allow a host to be found.

  This is because placement is now making those choices and does not
  (yet) break down the results of its queries into easy-to-understand
  chunks. If it returns zero results, all you know is "we didn't have
  enough resources", with no indication of which ones.

  This can be fixed by changing the way in which queries are made so
  that there is a series of queries. After each one, a report of how
  many results remain can be made.

  While this is relatively straightforward to do for the (currently)
  common simple non-nested and non-sharing providers situation, it will
  be more difficult for the non-simple cases. Therefore, it makes sense
  to have different code paths for simple and non-simple allocation
  candidate queries. This will also result in performance gains for the
  common case.

  See this email thread for additional discussion and reports of
  problems in the wild: http://lists.openstack.org/pipermail/openstack-
  dev/2018-August/132735.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1786519/+subscriptions



[Yahoo-eng-team] [Bug 1786318] Re: Volume status remains "detaching" after a failure to detach a volume due to DeviceDetachFailed

2018-08-10 Thread Matt Riedemann
** Changed in: nova/rocky
   Status: Fix Released => In Progress

** Tags added: rocky-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1786318

Title:
  Volume status remains "detaching" after a failure to detach a volume
  due to DeviceDetachFailed

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress
Status in OpenStack Compute (nova) rocky series:
  In Progress

Bug description:
  Description
  ===
  Volume status remains "detaching" after a failure to detach a volume due to 
DeviceDetachFailed

  Steps to reproduce
  ==
  Attempt to detach a volume while it is mounted and in-use within an instance.

  Expected result
  ===
  Volume detach fails and the volume returns to in-use.

  Actual result
  =
  Volume detach fails and the volume remains in a detaching state.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 R

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt + KVM

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?
   
 N/A

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

  Logs & Configs
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1786318/+subscriptions



[Yahoo-eng-team] [Bug 1786346] Re: live migrations slow

2018-08-10 Thread Matt Riedemann
** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1786346

Title:
  live migrations slow

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed

Bug description:
  This is on pike with ff747792b8f5aefe1bebb01bdf49dacc01353348 (on
  b58c7f033771e3ea228e4b40c796d1bc95a087f5 to be precise).  Live
  migrations seem to be stuck at 1MiB/s for linuxbridge VMs.  The code
  actually looks like it should fail with a timeout (we never see the
  "network interface created" debug message); it is not clear why it
  succeeds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1786346/+subscriptions



[Yahoo-eng-team] [Bug 1786318] Re: Volume status remains "detaching" after a failure to detach a volume due to DeviceDetachFailed

2018-08-10 Thread Matt Riedemann
** Also affects: nova/rocky
   Importance: Medium
 Assignee: Lee Yarwood (lyarwood)
   Status: Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1786318

Title:
  Volume status remains "detaching" after a failure to detach a volume
  due to DeviceDetachFailed

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress
Status in OpenStack Compute (nova) rocky series:
  Fix Released

Bug description:
  Description
  ===
  Volume status remains "detaching" after a failure to detach a volume due to 
DeviceDetachFailed

  Steps to reproduce
  ==
  Attempt to detach a volume while it is mounted and in-use within an instance.

  Expected result
  ===
  Volume detach fails and the volume returns to in-use.

  Actual result
  =
  Volume detach fails and the volume remains in a detaching state.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 R

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt + KVM

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?
   
 N/A

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

  Logs & Configs
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1786318/+subscriptions



[Yahoo-eng-team] [Bug 1786498] Re: placement api produces many warnings about policy scope check failures

2018-08-10 Thread Matt Riedemann
** Changed in: nova
 Assignee: Chris Dent (cdent) => Matt Riedemann (mriedem)

** Also affects: nova/rocky
   Importance: Medium
 Assignee: Matt Riedemann (mriedem)
   Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1786498

Title:
  placement api produces many warnings about policy scope check failures

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) rocky series:
  In Progress

Bug description:
  When oslo policy checks were added to placement, fixtures and
  functional tests were updated to hide warnings related to scope checks
  that cannot (yet) work in the way placement is managing policy.

  Those same warnings happen with every request on an actually running
  service. The warnings need to be stifled there too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1786498/+subscriptions



[Yahoo-eng-team] [Bug 1786318] Re: Volume status remains "detaching" after a failure to detach a volume due to DeviceDetachFailed

2018-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/590439
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=3a4b4b91b0bee514d47a67f6a5f9c4ab9ae7f7d8
Submitter: Zuul
Branch: master

commit 3a4b4b91b0bee514d47a67f6a5f9c4ab9ae7f7d8
Author: Lee Yarwood 
Date:   Thu Aug 9 17:43:50 2018 +0100

block_device: Rollback volumes to in-use on DeviceDetachFailed

The call to os-roll_detaching was dropped when
I3800b466a50b1e5f5d1e8c8a963d9a6258af67ee started to catch and reraise
DeviceDetachFailed. This call is required to ensure a volume returns to
an in-use state when the instance is unable to detach.

Closes-Bug: #1786318
Change-Id: I6b3dc09acc6f360806ce0ef8c8a65addbf4a8c51
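
The shape of the fix: catch DeviceDetachFailed, roll the volume back to
'in-use', and reraise. A hedged sketch with fake stand-ins for the
Cinder API wrapper and the detach call (not nova's actual code):

```python
class DeviceDetachFailed(Exception):
    """Stand-in for nova's DeviceDetachFailed exception."""

class FakeVolumeAPI:
    """Illustrative stand-in for nova's Cinder API wrapper."""
    def __init__(self):
        self.status = 'detaching'
    def roll_detaching(self, volume_id):
        # In the real API this calls Cinder's os-roll_detaching action.
        self.status = 'in-use'

def detach_volume(volume_api, volume_id, do_detach):
    """Reraise DeviceDetachFailed, but roll the volume back to 'in-use'
    first -- the call the earlier refactor accidentally dropped."""
    try:
        do_detach()
    except DeviceDetachFailed:
        volume_api.roll_detaching(volume_id)
        raise

api = FakeVolumeAPI()
def failing_detach():
    raise DeviceDetachFailed()

try:
    detach_volume(api, 'vol-1', failing_detach)
except DeviceDetachFailed:
    pass
assert api.status == 'in-use'  # no longer stuck in 'detaching'
```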


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1786318

Title:
  Volume status remains "detaching" after a failure to detach a volume
  due to DeviceDetachFailed

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  In Progress
Status in OpenStack Compute (nova) queens series:
  In Progress
Status in OpenStack Compute (nova) rocky series:
  Fix Released

Bug description:
  Description
  ===
  Volume status remains "detaching" after a failure to detach a volume due to 
DeviceDetachFailed

  Steps to reproduce
  ==
  Attempt to detach a volume while it is mounted and in-use within an instance.

  Expected result
  ===
  Volume detach fails and the volume returns to in-use.

  Actual result
  =
  Volume detach fails and the volume remains in a detaching state.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 R

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt + KVM

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?
   
 N/A

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

  Logs & Configs
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1786318/+subscriptions



[Yahoo-eng-team] [Bug 1786498] [NEW] placement api produces many warnings about policy scope check failures

2018-08-10 Thread Chris Dent
Public bug reported:

When oslo policy checks were added to placement, fixtures and functional
tests were updated to hide warnings related to scope checks that cannot
(yet) work in the way placement is managing policy.

Those same warnings happen with every request on an actually running
service. The warnings need to be stifled there too.
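
One generic way to stifle a specific warning in a running service is a
logging filter on the emitting logger. This is illustrative only: the
matched substring is a placeholder, not the exact oslo.policy message,
and the actual fix adjusts how placement configures policy enforcement
rather than filtering log records:

```python
import logging

class ScopeCheckWarningFilter(logging.Filter):
    """Drop scope-check warnings that would otherwise be logged on
    every request. The substring below is a placeholder for the real
    oslo.policy warning text."""
    def filter(self, record):
        return 'scope check' not in record.getMessage()

log = logging.getLogger('oslo_policy.policy')
log.addFilter(ScopeCheckWarningFilter())
```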

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: In Progress

** Affects: nova/rocky
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: In Progress


** Tags: placement rocky-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1786498

Title:
  placement api produces many warnings about policy scope check failures

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) rocky series:
  In Progress

Bug description:
  When oslo policy checks were added to placement, fixtures and
  functional tests were updated to hide warnings related to scope checks
  that cannot (yet) work in the way placement is managing policy.

  Those same warnings happen with every request on an actually running
  service. The warnings need to be stifled there too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1786498/+subscriptions



[Yahoo-eng-team] [Bug 1786490] [NEW] Create/Delete/Edit Role button are not shown in Angular Role table when runserver is used

2018-08-10 Thread Akihiro Motoki
Public bug reported:

When I start horizon using django runserver (tox -e runserver), none of
the action buttons (Create Role/Edit Role/Delete Role) are shown in the
AngularJS role table.
Note that this does not happen when I deploy horizon using apache.

It only happens in the Angular implementation, which might have some
problem.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: angularjs

** Tags added: angularjs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1786490

Title:
  Create/Delete/Edit Role button are not shown in Angular Role table
  when runserver is used

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When I start horizon using django runserver (tox -e runserver), none
  of the action buttons (Create Role/Edit Role/Delete Role) are shown
  in the AngularJS role table.
  Note that this does not happen when I deploy horizon using apache.

  It only happens in the Angular implementation, which might have some
  problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1786490/+subscriptions



[Yahoo-eng-team] [Bug 1782576] Re: Logging - No SG-log data found at /var/log/syslog

2018-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/587681
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=ced78395a7952d0e616055892645fd2a6165833f
Submitter: Zuul
Branch: master

commit ced78395a7952d0e616055892645fd2a6165833f
Author: Nguyen Phuong An 
Date:   Wed Aug 1 10:55:55 2018 +0700

Fix no ACCEPT event can get for security group logging

Currently, we cannot get ACCEPT packet logs because of some changes
to the OVS firewall code since OVS firewall logging was merged.

From a performance perspective, we only log the first accepted packet,
so we only need to forward the first accepted packet of each connection
session to tables 91 and 92.

This patch fixes these issues.

Closes-Bug: #1782576
Change-Id: Ib6ced838a7ec6d5c459a8475318556001c31bdf0


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1782576

Title:
  Logging - No SG-log data found at /var/log/syslog

Status in neutron:
  Fix Released

Bug description:
  When I created a log resource with security_group, no log data showed
  up in /var/log/syslog at all.

  [Environment]
  $ lsb_release -a; uname -a
  No LSB modules are available.
  Distributor ID: Ubuntu
  Description:Ubuntu 16.04.4 LTS
  Release:16.04
  Codename:   xenial
  Linux kolla 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 08:53:28 UTC 2018 
x86_64 x86_64 x86_64 GNU/Linux

  devstack all-in-one

  [Configuration]

  /etc/neutron/neutron.conf
  service_plugins = 
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,log

  /etc/neutron/plugins/ml2/ml2_conf.ini
  [securitygroup]
  firewall_driver = openvswitch
  [agent]
  extensions = log

  [Operation]
  $ openstack server create --image cirros-0.3.5-x86_64-disk --flavor c1 
--network private vm1
  $ openstack network log create --resource-type security_group --resource 
 --enable --event ALL sg-log

  [ovs flow log]
  I compared the following conditions with '$ ovs-ofctl dump-flows br-int':
  http://paste.openstack.org/compare/726273/726272/

  1. Before creating log-resource
  2. After created log-resource

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1782576/+subscriptions



[Yahoo-eng-team] [Bug 1785105] Re: Add multi-store support

2018-08-10 Thread Brian Rosmaita
https://review.openstack.org/#/c/576075/ was included in Rocky RC-1

** Changed in: glance
   Importance: Undecided => High

** Changed in: glance
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1785105

Title:
  Add multi-store support

Status in Glance:
  Fix Released

Bug description:
  https://review.openstack.org/574582
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/glance" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit cb45edf5c81f7d09c9ef0b88d40d56b4750beb10
  Author: Abhishek Kekane 
  Date:   Mon May 7 10:30:01 2018 +

  Add multi-store support
  
  Made provision for multi-store support. Added new config option
  'enabled_backends' which will be a comma separated Key:Value pair
  of store identifier and store type.
  
  DocImpact
  Depends-On: https://review.openstack.org/573648
  Implements: blueprint multi-store
  
  Change-Id: I9cfa066bdce51619a78ce86a8b1f1f8d05e5bfb6

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1785105/+subscriptions



[Yahoo-eng-team] [Bug 1786472] [NEW] Scenario test_connectivity_min_max_mtu fails when cirros is used

2018-08-10 Thread Slawek Kaplonski
Public bug reported:

Scenario test
neutron_tempest_plugin.scenario.test_mtu.NetworkWritableMtuTest.test_connectivity_min_max_mtu
fails if the cirros image is used.
This test tries to ping without packet fragmentation, which requires
passing the "-M do" option to the ping command, and that option is not
available in cirros.

So this test should be skipped if "image_is_advanced=False".
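
Skipping based on a config flag can be sketched with a standard skip
decorator; the flag name below is a stand-in for the tempest option,
not its real name:

```python
import unittest

IMAGE_IS_ADVANCED = False  # stand-in for the tempest "advanced image" option

class NetworkWritableMtuTest(unittest.TestCase):
    @unittest.skipUnless(IMAGE_IS_ADVANCED,
                         '"-M do" ping option is not available in cirros')
    def test_connectivity_min_max_mtu(self):
        pass  # would ping across the MTU boundary with fragmentation disabled

result = unittest.TestResult()
unittest.TestLoader().loadTestsFromTestCase(NetworkWritableMtuTest).run(result)
assert len(result.skipped) == 1  # the test is skipped, not failed
assert not result.failures and not result.errors
```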

** Affects: neutron
 Importance: Low
 Assignee: Slawek Kaplonski (slaweq)
 Status: In Progress


** Tags: tempest

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1786472

Title:
  Scenario test_connectivity_min_max_mtu fails when cirros is used

Status in neutron:
  In Progress

Bug description:
  Scenario test
  neutron_tempest_plugin.scenario.test_mtu.NetworkWritableMtuTest.test_connectivity_min_max_mtu
  fails if the cirros image is used.
  This test tries to ping without packet fragmentation, which requires
  passing the "-M do" option to the ping command, and that option is
  not available in cirros.

  So this test should be skipped if "image_is_advanced=False".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1786472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1786318] Re: Volume status remains "detaching" after a failure to detach a volume due to DeviceDetachFailed

2018-08-10 Thread Matt Riedemann
** Changed in: nova
   Importance: Undecided => Medium

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/queens
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1786318

Title:
  Volume status remains "detaching" after a failure to detach a volume
  due to DeviceDetachFailed

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed

Bug description:
  Description
  ===
  Volume status remains "detaching" after a failure to detach a volume due to 
DeviceDetachFailed

  Steps to reproduce
  ==
  Attempt to detach a volume while it is mounted and in-use within an instance.

  Expected result
  ===
  Volume detach fails and the volume returns to in-use.

  Actual result
  =
  Volume detach fails and the volume remains in a detaching state.
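
  The expected behaviour can be sketched as follows; the begin_detaching /
  roll_detaching names mirror the Cinder volume API calls Nova uses, but the
  classes here are toy stand-ins, not the actual fix:

```python
class DeviceDetachFailed(Exception):
    """Raised when the hypervisor refuses to detach the device."""

class FakeVolumeAPI:
    """Toy stand-in for the Cinder volume API status transitions."""
    def __init__(self):
        self.status = "in-use"

    def begin_detaching(self, volume_id):
        self.status = "detaching"

    def roll_detaching(self, volume_id):
        self.status = "in-use"

def detach_volume(volume_api, detach_device, volume_id):
    # Mark the volume 'detaching', and roll that back on failure so the
    # volume does not stay stuck in the 'detaching' state.
    volume_api.begin_detaching(volume_id)
    try:
        detach_device(volume_id)
    except DeviceDetachFailed:
        volume_api.roll_detaching(volume_id)
        raise

api = FakeVolumeAPI()

def failing_detach(volume_id):
    raise DeviceDetachFailed(volume_id)

try:
    detach_volume(api, failing_detach, "vol-1")
except DeviceDetachFailed:
    pass
# api.status is back to "in-use" rather than stuck in "detaching"
```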

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 R

  2. Which hypervisor did you use?
     (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
     What's the version of that?

     Libvirt + KVM

  3. Which storage type did you use?
     (For example: Ceph, LVM, GPFS, ...)
     What's the version of that?

     N/A

  4. Which networking type did you use?
     (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

  Logs & Configs
  ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1786318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1786451] [NEW] Nova Instance creation is failing with request_spec.instance_group appears to be None

2018-08-10 Thread M V P Nitesh
Public bug reported:

I'm trying to create a magnum cluster on Devstack (both Queens and
master). Cluster creation is failing.

The Nova instance goes into an error state with the following error:

{"message": "'NoneType' object has no attribute 'hosts'", "code": 500, "details": "
  File \"/opt/stack/nova/nova/conductor/manager.py\", line 585, in build_instances
    instance_uuids,
    return_alternates=True)
  File \"/opt/stack/nova/nova/conductor/manager.py\", line 720, in _schedule_instances
    scheduler_utils.setup_instance_group(context, request_spec)
  File \"/opt/stack/nova/nova/scheduler/utils.py\", line 836, in setup_instance_group
    request_spec.instance_group.hosts = list(group_info.hosts)
", "created": "2018-08-10T08:32:01Z"}

Getting this error http://paste.openstack.org/show/727798/ in nova-cell-
region and http://paste.openstack.org/show/727799/  in nova-cell-child
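
A guarded version of the failing line from the traceback might look like this (a sketch only; RequestSpec and GroupInfo here are toy stand-ins, not the Nova objects):

```python
class RequestSpec:
    def __init__(self, instance_group=None):
        self.instance_group = instance_group

class GroupInfo:
    def __init__(self, hosts):
        self.hosts = hosts

def setup_instance_group(request_spec, group_info):
    # The traceback shows instance_group being None; bail out early
    # instead of dereferencing it.
    if request_spec.instance_group is None:
        return
    request_spec.instance_group.hosts = list(group_info.hosts)

spec = RequestSpec()                               # no server group attached
setup_instance_group(spec, GroupInfo(["host1"]))   # no longer raises
```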

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1786451

Title:
  Nova Instance creation is failing with  request_spec.instance_group
  appears to be None

Status in OpenStack Compute (nova):
  New

Bug description:
  I'm trying to create a magnum cluster on Devstack (both Queens and
  master). Cluster creation is failing.

  The Nova instance goes into an error state with the following error:

  {"message": "'NoneType' object has no attribute 'hosts'", "code": 500, "details": "
    File \"/opt/stack/nova/nova/conductor/manager.py\", line 585, in build_instances
      instance_uuids,
      return_alternates=True)
    File \"/opt/stack/nova/nova/conductor/manager.py\", line 720, in _schedule_instances
      scheduler_utils.setup_instance_group(context, request_spec)
    File \"/opt/stack/nova/nova/scheduler/utils.py\", line 836, in setup_instance_group
      request_spec.instance_group.hosts = list(group_info.hosts)
  ", "created": "2018-08-10T08:32:01Z"}

  Getting this error http://paste.openstack.org/show/727798/ in nova-
  cell-region and http://paste.openstack.org/show/727799/  in nova-cell-
  child

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1786451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1786213] Re: Metering agent: failed to run ip netns command

2018-08-10 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/590215
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=40d92d40ec0953f82a040c2197304934efa3e9e1
Submitter: Zuul
Branch:master

commit 40d92d40ec0953f82a040c2197304934efa3e9e1
Author: Dongcan Ye 
Date:   Thu Aug 9 10:05:36 2018 +

Config privsep in the metering agent

Enable privsep to execute ip netns or other commands.

Change-Id: I4e20a1e92c0d154b76615437efe5eced4e0cc6bd
Closes-Bug: #1786213
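
The failure mode can be illustrated with a toy model (a sketch of the ordering requirement only; the real fix initializes privsep from the agent entry point via neutron's config setup, not via this class):

```python
class TinyPrivContext:
    """Toy model of an oslo.privsep-style context: privileged calls only
    succeed after the context has been initialized, which the fixed agent
    does in its main() before the first 'ip netns' invocation."""
    def __init__(self):
        self.initialized = False

    def init(self, root_helper=None):
        self.initialized = True

    def run(self, cmd):
        if not self.initialized:
            raise RuntimeError("privsep helper command exited non-zero (1)")
        return "ok"

ctx = TinyPrivContext()
# The fix: initialize privsep before running any privileged command.
ctx.init(root_helper="sudo neutron-rootwrap")
status = ctx.run(["ip", "netns", "list"])
```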


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1786213

Title:
  Metering agent: failed to run ip netns command

Status in neutron:
  Fix Released

Bug description:
  When neutron-metering-agent starts, it fails as follows:
  2018-08-09 03:15:08.504 10637 INFO oslo.privsep.daemon 
[req-6dd24e96-e82b-49a5-9d33-80e1ff572502 - - - - -] Running privsep helper: 
['sudo', 'privsep-helper', '--config-file', 
'/usr/share/neutron/neutron-dist.conf', '--config-file', 
'/etc/neutron/neutron.conf', '--config-file', 
'/etc/neutron/metering_agent.ini', '--config-dir', 
'/etc/neutron/conf.d/neutron-metering-agent', '--privsep_context', 
'neutron.privileged.default', '--privsep_sock_path', 
'/tmp/tmp_JYWYs/privsep.sock']
  2018-08-09 03:15:08.525 10637 WARNING oslo.privsep.daemon [-] privsep log:
  2018-08-09 03:15:08.526 10637 WARNING oslo.privsep.daemon [-] privsep log: We 
trust you have received the usual lecture from the local System
  2018-08-09 03:15:08.527 10637 WARNING oslo.privsep.daemon [-] privsep log: 
Administrator. It usually boils down to these three things:
  2018-08-09 03:15:08.527 10637 WARNING oslo.privsep.daemon [-] privsep log:
  2018-08-09 03:15:08.528 10637 WARNING oslo.privsep.daemon [-] privsep log:
 #1) Respect the privacy of others.
  2018-08-09 03:15:08.528 10637 WARNING oslo.privsep.daemon [-] privsep log:
 #2) Think before you type.
  2018-08-09 03:15:08.528 10637 WARNING oslo.privsep.daemon [-] privsep log:
 #3) With great power comes great responsibility.
  2018-08-09 03:15:08.529 10637 WARNING oslo.privsep.daemon [-] privsep log:
  2018-08-09 03:15:08.531 10637 WARNING oslo.privsep.daemon [-] privsep log: 
sudo: no tty present and no askpass program specified
  2018-08-09 03:15:08.544 10637 CRITICAL oslo.privsep.daemon 
[req-6dd24e96-e82b-49a5-9d33-80e1ff572502 - - - - -] privsep helper command 
exited non-zero (1)

  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task 
[req-6dd24e96-e82b-49a5-9d33-80e1ff572502 - - - - -] Error during 
MeteringAgentWithStateReport._sync_routers_task: FailedToDropPrivileges: 
privsep helper command exited non-zero (1)
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task Traceback 
(most recent call last):
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/oslo_service/periodic_task.py", line 220, in 
run_periodic_tasks
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task task(self, 
context)
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/services/metering/agents/metering_agent.py",
 line 189, in _sync_routers_task
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task 
self._update_routers(context, routers)
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/services/metering/agents/metering_agent.py",
 line 212, in _update_routers
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task 
'update_routers')
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in 
inner
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task return 
f(*args, **kwargs)
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/services/metering/agents/metering_agent.py",
 line 166, in _invoke_driver
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task return 
getattr(self.metering_driver, func_name)(context, meterings)
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/oslo_log/helpers.py", line 67, in wrapper
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task return 
method(*args, **kwargs)
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 
"/usr/lib/python2.7/site-packages/neutron/services/metering/drivers/iptables/iptables_driver.py",
 line 151, in update_routers
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task 
self._process_associate_metering_label(router)
  2018-08-09 03:15:08.546 10637 ERROR oslo_service.periodic_task   File 

[Yahoo-eng-team] [Bug 1771773] Re: Ssl2/3 should not be used for secure VNC access

2018-08-10 Thread Daniel Berrange
Sorry, I didn't mean to suggest we should abandon the change/bug, as not
all distros have crypto policy support systemwide.

Rather, we should:

1. make sure the out-of-the-box behaviour is to honour OpenSSL defaults
2. provide a nova.conf setting for the protocol version, which allows an 
ordered list of versions to be set by the admin, e.g. something like 
vnc_tls_protocol = [ "tls1.3", "tls1.2" ]

** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771773

Title:
  Ssl2/3 should not be used for secure VNC access

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  This report is based on Bandit scanner results.

  On
  
https://git.openstack.org/cgit/openstack/nova/tree/nova/console/rfb/authvencrypt.py?h=refs/heads/master#n137

  137 wrapped_sock = ssl.wrap_socket(

  wrap_socket is used without an ssl_version, which means SSLv23 by default.
  As the server side (QEMU) is based on GnuTLS, which supports all modern TLS
  versions, it is possible to use a stricter TLS version on the client (TLSv1.2).
  Another option is to make this parameter configurable.
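
  One hedged way to address both points (honour OpenSSL defaults while
  allowing a stricter floor) is an SSLContext instead of the deprecated
  ssl.wrap_socket; the function name and default below are illustrative:

```python
import ssl

def make_vnc_client_context(min_version=ssl.TLSVersion.TLSv1_2):
    """Client-side TLS context with a configurable minimum version.

    PROTOCOL_TLS_CLIENT negotiates the highest version both sides
    support (never SSLv2/SSLv3), and minimum_version pins the floor.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = min_version
    return ctx

ctx = make_vnc_client_context()
```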

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1771773/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1786413] [NEW] Cannot load neutron_fwaas.conf by neutron-api

2018-08-10 Thread Yushiro FURUKAWA
Public bug reported:

If we deploy devstack with neutron-fwaas (v2), neutron_fwaas.conf is not
loaded by the neutron-api/q-svc service.  Therefore, when both neutron-fwaas
and neutron-vpnaas are enabled, the neutron-api/q-svc service fails to start
with the following error:

  No providers specified for 'FIREWALL_V2' service, exiting

Here is a sample of local.conf: http://paste.openstack.org/show/727445/

$ cat /etc/systemd/system/devstack@neutron-api.service 
...(snip)...
[Service]
ExecReload = /bin/kill -HUP $MAINPID
TimeoutStopSec = 300
KillMode = process
ExecStart = /usr/local/bin/neutron-server  --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
User = ubuntu
...(snip)...

It should be loaded by passing "--config-file /etc/neutron/neutron_fwaas.conf"
to neutron-server.
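
A sketch of the kind of edit needed, exercised on a scratch copy of the unit file (the sed pattern assumes the ExecStart line shown above):

```shell
unit=$(mktemp)
cat > "$unit" <<'EOF'
ExecStart = /usr/local/bin/neutron-server  --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
EOF

# Append the fwaas config file to the neutron-server command line.
sed -i 's|ml2_conf.ini|& --config-file /etc/neutron/neutron_fwaas.conf|' "$unit"
```

After editing the real unit file, a "systemctl daemon-reload" followed by a restart of devstack@neutron-api would be needed for the change to take effect.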

** Affects: neutron
 Importance: Medium
 Assignee: Yushiro FURUKAWA (y-furukawa-2)
 Status: In Progress


** Tags: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1786413

Title:
  Cannot load neutron_fwaas.conf by neutron-api

Status in neutron:
  In Progress

Bug description:
  If we deploy devstack with neutron-fwaas (v2), neutron_fwaas.conf is not
  loaded by the neutron-api/q-svc service.  Therefore, when both
  neutron-fwaas and neutron-vpnaas are enabled, the neutron-api/q-svc
  service fails to start with the following error:

No providers specified for 'FIREWALL_V2' service, exiting

  Here is a sample of local.conf:
  http://paste.openstack.org/show/727445/

  $ cat /etc/systemd/system/devstack@neutron-api.service 
  ...(snip)...
  [Service]
  ExecReload = /bin/kill -HUP $MAINPID
  TimeoutStopSec = 300
  KillMode = process
  ExecStart = /usr/local/bin/neutron-server  --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  User = ubuntu
  ...(snip)...

  It should be loaded by passing "--config-file
  /etc/neutron/neutron_fwaas.conf" to neutron-server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1786413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp