[Yahoo-eng-team] [Bug 1823115] [NEW] When importing a public key, Keypair name field input validation disallows underscore

2019-04-03 Thread datakid
Public bug reported:

Discovered on horizon/queens.

When creating a key pair in web interface, underscores are allowed in
the name of the key pair.

When importing a public key, putting an underscore in the field titled
"Key Pair Name" results in an error "Key Pair Name is formatted
incorrectly"

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1823115

Title:
  When importing a public key, Keypair name field input validation
  disallows underscore

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Discovered on horizon/queens.

  When creating a key pair in web interface, underscores are allowed in
  the name of the key pair.

  When importing a public key, putting an underscore in the field titled
  "Key Pair Name" results in an error "Key Pair Name is formatted
  incorrectly"

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1823115/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1823113] [NEW] On the copy form under container panel, if the Destination Container does not exist and the Destination Object is input, the Destination object will be wrong, but

2019-04-03 Thread pengyuesheng
Public bug reported:

On the copy form under the container panel, if the Destination Container
does not exist and a Destination Object is entered, the destination
object ends up wrong but no error message is reported; if the
Destination Container exists and an existing object is entered as the
Destination Object, a Destination Container error is raised instead.

** Affects: horizon
 Importance: Undecided
 Assignee: pengyuesheng (pengyuesheng)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1823113

Title:
  On the copy form under container panel, if the Destination Container
  does not exist and the Destination Object is input, the Destination
  object will be wrong, but no error message will be reported; if the
  Destination container exists, the Destination  object will input an
  existing object, which will cause the Destination container error.

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  On the copy form under the container panel, if the Destination Container
  does not exist and a Destination Object is entered, the destination
  object ends up wrong but no error message is reported; if the
  Destination Container exists and an existing object is entered as the
  Destination Object, a Destination Container error is raised instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1823113/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1823109] [NEW] On the Edit form under container panel, the file is required, but there is no required mark.

2019-04-03 Thread pengyuesheng
Public bug reported:

On the Edit form under the container panel, the file field is required,
but it has no required mark.

** Affects: horizon
 Importance: Undecided
 Assignee: pengyuesheng (pengyuesheng)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1823109

Title:
  On the Edit form under container panel, the file is required, but
  there is no required mark.

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  On the Edit form under the container panel, the file field is required,
  but it has no required mark.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1823109/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1823104] [NEW] CellMappingPayload in select_destinations versioned notification sends sensitive database_connection and transport_url information

2019-04-03 Thread Matt Riedemann
Public bug reported:

As of this change in Stein:

https://review.openstack.org/#/c/508506/28/nova/notifications/objects/request_spec.py@334

Which is not yet officially released, but is in the 19.0.0.0rc1, the
select_destinations versioned notification payload during a move
operation (resize, cold/live migrate, unshelve, evacuate) will send the
cell database_connection URL and MQ transport_url information which
contains credentials to connect directly to the cell DB and MQ, which
even though notifications are meant to be internal within openstack
services, seems like a pretty bad idea. IOW, just because it's internal
to openstack doesn't mean nova needs to give ceilometer the keys to its
cell databases.

There seems to be no justification in the change for *why* this
information was needed in the notification payload, it seemed to be
added simply for completeness.
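As a rough illustration of the concern (not nova's actual payload code), a
whitelist-style helper avoids copying credential-bearing fields into
anything that leaves the service:

```
# Hypothetical helper: expose only non-sensitive cell mapping fields in a
# notification payload, never database_connection or transport_url.
SAFE_CELL_FIELDS = ('uuid', 'name', 'disabled')


def safe_cell_payload(cell_mapping):
    return {field: getattr(cell_mapping, field, None)
            for field in SAFE_CELL_FIELDS}
```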

** Affects: nova
 Importance: High
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged

** Affects: nova/stein
 Importance: Undecided
 Status: New


** Tags: notifications security stein-rc-potential

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Also affects: nova/stein
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1823104

Title:
  CellMappingPayload in select_destinations versioned notification sends
  sensitive database_connection and transport_url information

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) stein series:
  New

Bug description:
  As of this change in Stein:

  
https://review.openstack.org/#/c/508506/28/nova/notifications/objects/request_spec.py@334

  Which is not yet officially released, but is in the 19.0.0.0rc1, the
  select_destinations versioned notification payload during a move
  operation (resize, cold/live migrate, unshelve, evacuate) will send
  the cell database_connection URL and MQ transport_url information
  which contains credentials to connect directly to the cell DB and MQ,
  which even though notifications are meant to be internal within
  openstack services, seems like a pretty bad idea. IOW, just because
  it's internal to openstack doesn't mean nova needs to give ceilometer
  the keys to its cell databases.

  There seems to be no justification in the change for *why* this
  information was needed in the notification payload, it seemed to be
  added simply for completeness.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1823104/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1823100] [NEW] Ephemeral disk not mounted when new instance requires reformat of the volume

2019-04-03 Thread Jason Zions
Public bug reported:

Each Azure VM is provided with an ephemeral disk, and the cloud-init
configuration supplied in the VM image requests the volume be mounted
under /mnt. Each new ephemeral disk is formatted for NTFS rather than
ext4 or another Linux filesystem. The Azure datasource detects this (in
.activate()) and makes sure the disk_setup and mounts modules run. The
disk_setup module formats the volume; the mounts module sees that the
ephemeral volume is configured to be mounted and it adds the appropriate
entry to /etc/fstab. After updating fstab, the mounts module invokes the
"mount -a" command to mount (or unmount) volumes according to fstab.
That's how it all works during the initial provisioning of a new VM.

When a VM gets rehosted for any reason (service heal, stop/deallocate
and restart), the ephemeral drive provided to the previous instance is
lost. A new ephemeral volume is supplied, also formatted ntfs. When the
VM is booted, systemd's mnt.mount unit runs and complains about the
unmountable ntfs volume that's still in /etc/fstab. The disk_setup
module properly formats the volume. However, the mounts module sees the
volume is *already* in fstab, sees that it didn't change anything, so it
doesn't run "mount -a". The net result: the volume doesn't get mounted.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1823100

Title:
  Ephemeral disk not mounted when new instance requires reformat of the
  volume

Status in cloud-init:
  New

Bug description:
  Each Azure VM is provided with an ephemeral disk, and the cloud-init
  configuration supplied in the VM image requests the volume be mounted
  under /mnt. Each new ephemeral disk is formatted for NTFS rather than
  ext4 or another Linux filesystem. The Azure datasource detects this
  (in .activate()) and makes sure the disk_setup and mounts modules run.
  The disk_setup module formats the volume; the mounts module sees that
  the ephemeral volume is configured to be mounted and it adds the
  appropriate entry to /etc/fstab. After updating fstab, the mounts
  module invokes the "mount -a" command to mount (or unmount) volumes
  according to fstab. That's how it all works during the initial
  provisioning of a new VM.

  When a VM gets rehosted for any reason (service heal, stop/deallocate
  and restart), the ephemeral drive provided to the previous instance is
  lost. A new ephemeral volume is supplied, also formatted ntfs. When
  the VM is booted, systemd's mnt.mount unit runs and complains about
  the unmountable ntfs volume that's still in /etc/fstab. The disk_setup
  module properly formats the volume. However, the mounts module sees
  the volume is *already* in fstab, sees that it didn't change anything,
  so it doesn't run "mount -a". The net result: the volume doesn't get
  mounted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1823100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822613] Re: Inefficient queries inside online_data_migrations

2019-04-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/649648
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=cec1808050495aa43a2b67058077063bf3b6f4ed
Submitter: Zuul
Branch: master

commit cec1808050495aa43a2b67058077063bf3b6f4ed
Author: Matt Riedemann 
Date:   Wed Apr 3 11:35:37 2019 -0400

Drop migrate_keypairs_to_api_db data migration

This was added in Newton:

  I97b72ae3e7e8ea3d6b596870d8da3aaa689fd6b5

And was meant to migrate keypairs from the cell
(nova) DB to the API DB. Before that though, the
keypairs per instance would be migrated to the
instance_extra table in the cell DB. The migration
to instance_extra was dropped in Queens with change:

  Ie83e7bd807c2c79e5cbe1337292c2d1989d4ac03

As the commit message on ^ mentions, the 345 cell
DB schema migration required that the cell DB keypairs
table was empty before you could upgrade to Ocata.

The migrate_keypairs_to_api_db routine only migrates
any keypairs to the API DB if there are entries in the
keypairs table in the cell DB, but because of that blocker
migration in Ocata that cannot be the case anymore, so
really migrate_keypairs_to_api_db is just wasting time
querying the database during the online_data_migrations
routine without it actually migrating anything, so we
should just remove it.

Change-Id: Ie56bc411880c6d1c04599cf9521e12e8b4878e1e
Closes-Bug: #1822613


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1822613

Title:
  Inefficient queries inside online_data_migrations

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The online_data_migrations command should be run after an upgrade and
  contains a list of tasks to backfill information after an upgrade;
  however, some of those queries are extremely inefficient, which results
  in the online data migrations taking an unacceptable period of time.
  The SQL query in question that takes a really long time:

  > SELECT count(*) AS count_1
  > FROM (SELECT instance_extra.created_at AS instance_extra_created_at,
  > instance_extra.updated_at AS instance_extra_updated_at,
  > instance_extra.deleted_at AS instance_extra_deleted_at,
  > instance_extra.deleted AS instance_extra_deleted, instance_extra.id AS
  > instance_extra_id, instance_extra.instance_uuid AS
  > instance_extra_instance_uuid
  > FROM instance_extra
  > WHERE instance_extra.keypairs IS NULL AND instance_extra.deleted = 0) AS 
anon_1
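  A sketch of a cheaper check (illustrative SQLAlchemy, not the actual nova
  code): instead of counting every matching row, probe whether any row is
  left to migrate; the "instance_extra" table object and "session" below
  are assumptions, not real names from this codebase:

  ```
  from sqlalchemy import exists, select

  # EXISTS short-circuits on the first matching row instead of scanning
  # the whole instance_extra table the way COUNT(*) does.
  stmt = select([exists().where(
      (instance_extra.c.keypairs.is_(None)) &
      (instance_extra.c.deleted == 0))])
  rows_remaining = session.execute(stmt).scalar()
  ```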

  It would also be good for us to *not* run a data migration again if we
  know we've already gotten found=0 when online_data_migrations is
  running in "forever-until-complete".  Also, the value of 50 rows per
  run in that mode is quite small.

  ref: http://lists.openstack.org/pipermail/openstack-
  discuss/2019-April/004397.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1822613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1823099] [NEW] Azure datasource sometimes fails to find wireserver endpoint

2019-04-03 Thread Jason Zions
Public bug reported:

The Azure datasource needs to communicate with the "Wireserver" control
plane endpoint during provisioning. The IP address of the endpoint is
passed to the VM as option 245 via DHCP. Code to retrieve that address
is remarkably fragile across distros and in the face of changes users
can make in choosing or configuring a network manager. When the
datasource fails to find the endpoint address, provisioning will often
fail, leaving an uncommunicative VM behind. While error messages are
logged, the very nature of the problem (incorrect network setup and
incomplete ssh key provisioning, among other things) means those log
messages are difficult to access from outside the VM, and access to the
VM itself (e.g. via ssh) is often blocked.
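For context, a minimal sketch of the kind of lookup involved (the lease
path and option spelling below are assumptions; the real cloud-init code
varies per distro and network manager): read a dhclient lease file and
turn option 245 into the wireserver IP:

```
import re


def wireserver_from_leases(lease_path='/var/lib/dhcp/dhclient.eth0.leases'):
    """Return the option-245 endpoint from a dhclient lease file, or None.

    Assumes entries of the dhclient "unknown-245" form, e.g.
        option unknown-245 a8:3f:81:10;
    where each hex byte is one octet of the endpoint address.
    """
    with open(lease_path) as leases:
        match = re.search(r'option unknown-245 ([0-9a-f:]+);', leases.read())
    if not match:
        return None
    return '.'.join(str(int(octet, 16))
                    for octet in match.group(1).split(':'))
```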

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1823099

Title:
  Azure datasource sometimes fails to find wireserver endpoint

Status in cloud-init:
  New

Bug description:
  The Azure datasource needs to communicate with the "Wireserver"
  control plane endpoint during provisioning. The IP address of the
  endpoint is passed to the VM as option 245 via DHCP. Code to retrieve
  that address is remarkably fragile across distros and in the face of
  changes users can make in choosing or configuring a network manager.
  When the datasource fails to find the endpoint address, provisioning
  will often fail, leaving an uncommunicative VM behind. While error
  messages are logged, the very nature of the problem (incorrect network
  setup and incomplete ssh key provisioning, among other things) means
  those log messages are difficult to access from outside the VM, and
  access to the VM itself (e.g. via ssh) is often blocked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1823099/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1819764] Re: Excessive warnings in nova-compute logs about unexpected network-vif-unplugged events during live migration

2019-04-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/642877
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=521e59224e8a595af66ec898f1bb739a9dbe7d97
Submitter: Zuul
Branch: master

commit 521e59224e8a595af66ec898f1bb739a9dbe7d97
Author: Matt Riedemann 
Date:   Tue Mar 12 16:11:04 2019 -0400

Don't warn on network-vif-unplugged event during live migration

The "network-vif-unplugged" event is expected during live migration
and nothing listens for it so we should not log a warning in that case.

Change-Id: I8fd8df211670f1abbcb7b496e62589295922bdc1
Closes-Bug: #1819764


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1819764

Title:
  Excessive warnings in nova-compute logs about unexpected network-vif-
  unplugged events during live migration

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Seen here:

  http://logs.openstack.org/63/631363/22/check/nova-grenade-live-
  migration/1a03cbc/logs/subnode-2/screen-n-cpu.txt.gz?level=WARNING

  Mar 11 07:50:38.515198 ubuntu-xenial-ovh-bhs1-0003641906 nova-
  compute[3321]: WARNING nova.compute.manager [req-883bec65-937b-
  4ed6-9d05-f0282ed17b75 req-11ffd59a-bf70-4cec-9678-36d3335ef766
  service nova] [instance: faf3444e-f788-4ff4-992a-b14ce53ef8f8]
  Received unexpected event network-vif-unplugged-85392fed-4d68-4e01
  -818a-e004a33d95b6 for instance with vm_state paused and task_state
  migrating.

  There are 22 instances of that same type of warning in that compute
  log, and it's all over our CI jobs that run live migration tests:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Received%20unexpected%20event
  %20network-vif-
  
unplugged%5C%22%20AND%20message%3A%5C%22and%20task_state%20migrating.%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22=7d

  We don't listen for network-vif-unplugged events during live migration
  and the unplug events are expected so we shouldn't warn about those.
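  As a rough sketch of the idea (not the actual nova change), the warning
  can simply be downgraded when the event is an expected side effect of the
  migration in progress:

  ```
  import logging

  LOG = logging.getLogger(__name__)


  def log_external_event(event_name, vm_state, task_state):
      # Hypothetical helper: network-vif-unplugged while the instance is
      # migrating is expected noise, so use DEBUG instead of WARNING.
      expected = (event_name.startswith('network-vif-unplugged')
                  and task_state == 'migrating')
      level = logging.DEBUG if expected else logging.WARNING
      LOG.log(level,
              'Received unexpected event %s for instance with vm_state %s '
              'and task_state %s.', event_name, vm_state, task_state)
  ```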

  We see quite a bit of this for other move operations like resize
  revert and shelve offload as well, and hard reboots, but we could deal
  with those separately:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Received%20unexpected%20event
  %20network-vif-
  
unplugged%5C%22%20AND%20NOT%20message%3A%5C%22and%20task_state%20None%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22=7d

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1819764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1823089] [NEW] modal js code can't handle file download

2019-04-03 Thread Adrian Turjak
Public bug reported:

The modal js code doesn't know how to handle a response whose data is a
file attachment. What it does instead is append the data from the file to
the modal_wrapper.

My reason for hitting this is that I have a form which asks for some
information, and then the handle function returns a file.

```
content = render_to_string(
    template, context, request=request)
content = '\n'.join([line for line in content.split('\n')
                     if line.strip()])
response = http.HttpResponse(
    content, content_type="text/plain")

filename = 'openstack_backup_codes.txt'
disposition = 'attachment; filename=%s' % filename
response['Content-Disposition'] = disposition.encode('utf-8')
response['Content-Length'] = str(len(response.content))
return response
```

** Affects: horizon
 Importance: Undecided
 Assignee: Adrian Turjak (adriant-y)
 Status: In Progress

** Summary changed:

- modal js code breaks on file download
+ modal js code can't handle file download

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1823089

Title:
  modal js code can't handle file download

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The modal js code doesn't know how to handle a response whose data is a
  file attachment. What it does instead is append the data from the file
  to the modal_wrapper.

  My reason for hitting this is that I have a form which asks for some
  information, and then the handle function returns a file.

  ```
  content = render_to_string(
      template, context, request=request)
  content = '\n'.join([line for line in content.split('\n')
                       if line.strip()])
  response = http.HttpResponse(
      content, content_type="text/plain")

  filename = 'openstack_backup_codes.txt'
  disposition = 'attachment; filename=%s' % filename
  response['Content-Disposition'] = disposition.encode('utf-8')
  response['Content-Length'] = str(len(response.content))
  return response
  ```

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1823089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1823084] [NEW] DataSourceAzure doesn't rebuild network-config after reboot

2019-04-03 Thread Jason Zions
Public bug reported:

After merge 365065 (commit 0dc3a77f4), when an Azure VM (previously
provisioned via cloud-init) is rebooted, DataSourceAzure fails to
recreate a NetworkConfig, with multiple exceptions raised and caught.

When the ds is restored from obj.pkl in the instance directory,
self._network_config is reloaded as the string "_unset" rather than as a
dictionary. Comments in the datasource indicate this was a deliberate
decision; the intent was to force the datasource to rebuild the network
configuration at each boot based on information fetched from the Azure
control plane. The self._network_config dict is overwritten very quickly
after it is generated and used; the net result is that the "_unset"
string is deliberately saved as obj['ds']['network_config']
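A minimal sketch of the implied fix (hypothetical hook name; the real
datasource restore path differs): whatever was cached in obj.pkl, the
network config should be forced back to the sentinel so it is regenerated
from fresh Azure metadata on every boot:

```
UNSET = '_unset'


class DataSourceAzureSketch(object):
    # Hypothetical restore hook standing in for the obj.pkl unpickling
    # path: reset the cached value so network_config is rebuilt.
    def restore_from_cache(self, cached_state):
        self.__dict__.update(cached_state)
        self._network_config = UNSET
```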

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "cloud-init collect-logs output"
   
https://bugs.launchpad.net/bugs/1823084/+attachment/5252625/+files/cloud-init.tar.gz

** Merge proposal linked:
   https://code.launchpad.net/~jasonzio/cloud-init/+git/cloud-init/+merge/365377

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1823084

Title:
  DataSourceAzure doesn't rebuild network-config after reboot

Status in cloud-init:
  New

Bug description:
  After merge 365065 (commit 0dc3a77f4), when an Azure VM (previously
  provisioned via cloud-init) is rebooted, DataSourceAzure fails to
  recreate a NetworkConfig, with multiple exceptions raised and caught.

  When the ds is restored from obj.pkl in the instance directory,
  self._network_config is reloaded as the string "_unset" rather than as
  a dictionary. Comments in the datasource indicate this was a
  deliberate decision; the intent was to force the datasource to rebuild
  the network configuration at each boot based on information fetched
  from the Azure control plane. The self._network_config dict is
  overwritten very quickly after it is generated and used; the net
  result is that the "_unset" string is deliberately saved as
  obj['ds']['network_config']

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1823084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1821938] Re: No nova hypervisor can be enabled on workers with QAT devices

2019-04-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/649409
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e7ae6c65cd24fb3e0776fac80fbab2ab16e9d9ed
Submitter: Zuul
Branch: master

commit e7ae6c65cd24fb3e0776fac80fbab2ab16e9d9ed
Author: Sean Mooney 
Date:   Tue Apr 2 18:27:24 2019 +0100

Libvirt: gracefully handle non-nic VFs

As part of adding support for bandwidth based scheduling
I038867c4094d79ae4a20615ab9c9f9e38fcc2e0a introduced
automatic discovery of parent netdev names for PCIe
virtual functions.

Nova's PCI passthrough support was originally developed for
Intel QAT devices and other generic PCI devices. Later support
for Neutron based SR-IOV NIC was added.

The PCI-SIG SR-IOV specification while most often used by NIC
vendors to virtualise a NIC in hardware was designed for devices
of any PCIe class. Support for Intel's QAT device and other
accelerators like AMD's SRIOV based vGPU have therefore been
regressed by the introduction of the new parent_ifname lookup code.

This change simply catches the exception that would be raised
when pci_utils.get_ifname_by_pci_address is called on generic
VFs allowing a graceful fallback to the previous behaviour.

Change-Id: Ib3811f828246311d90b0e3ba71c162c03fb8fe5a
Closes-Bug: #1821938
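A rough sketch of the fallback described above (illustrative only; the
real change lives in nova's libvirt driver, and the exception it catches
is narrower):

```
from nova.pci import utils as pci_utils


def parent_ifname_or_none(pci_address):
    """Return the parent netdev name for a NIC VF, or None for non-NIC VFs.

    QAT and other non-network VFs have no netdev behind them, so the
    lookup fails; swallowing that lets generic PCI passthrough keep
    working as before.
    """
    try:
        return pci_utils.get_ifname_by_pci_address(pci_address,
                                                   pf_interface=True)
    except Exception:  # broad for illustration only
        return None
```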


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1821938

Title:
  No nova hypervisor can be enabled on workers with QAT devices

Status in OpenStack Compute (nova):
  Fix Released
Status in StarlingX:
  In Progress

Bug description:
  Brief Description
  -
  Unable to enable a host as a nova hypervisor because a PCI device cannot be 
found if the host has QAT devices (C62x or DH895XCC) configured.

  Severity
  
  Major

  Steps to Reproduce
  --
  - Install and configure a system where worker nodes have QAT devices 
configured. e.g.,
  [wrsroot@controller-0 ~(keystone_admin)]$ system host-device-list compute-0
  
  +--------------+----------+----------+-----------+-----------+---------------------------+---------------------------------+----------------------------------------+-----------+---------+
  | name         | address  | class id | vendor id | device id | class name                | vendor name                     | device name                            | numa_node | enabled |
  +--------------+----------+----------+-----------+-----------+---------------------------+---------------------------------+----------------------------------------+-----------+---------+
  | pci__09_00_0 | :09:00.0 | 0b4000   | 8086      | 0435      | Co-processor              | Intel Corporation               | DH895XCC Series QAT                    | 0         | True    |
  | pci__0c_00_0 | :0c:00.0 | 03       | 102b      | 0522      | VGA compatible controller | Matrox Electronics Systems Ltd. | MGA G200e [Pilot] ServerEngines (SEP1) | 0         | True    |
  +--------------+----------+----------+-----------+-----------+---------------------------+---------------------------------+----------------------------------------+-----------+---------+

  compute-0:~$ lspci | grep QAT
  09:00.0 Co-processor: Intel Corporation DH895XCC Series QAT
  09:01.0 Co-processor: Intel Corporation DH895XCC Series QAT Virtual Function
  09:01.1 Co-processor: Intel Corporation DH895XCC Series QAT Virtual Function
  ...

  - check nova hypervisor-list

  Expected Behavior
  --
  - Nova hypervisors exist on system

  Actual Behavior
  
  [wrsroot@controller-0 ~(keystone_admin)]$ nova hypervisor-list
  ++-+---++
  | ID | Hypervisor hostname | State | Status |
  ++-+---++
  ++-+---++

  Reproducibility
  ---
  Reproducible

  System Configuration
  
  Any system type with QAT devices configured on worker node

  Branch/Pull Time/Commit
  ---
  stx master as of 2019-03-18

  Last Pass
  --
  on f/stein branch in early feb

  Timestamp/Logs
  --
  # nova-compute pods are spewing errors so they can't register themselves properly as hypervisors:
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager [req-4f652d4c-da7e-4516-9baa-915265c3fdda - - - - -] Error updating resources for node compute-0.: PciDeviceNotFoundById: PCI device :09:02.3 not found
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager Traceback (most recent call last):
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager File "/var/lib/openstack/lib/python2.7/site-packages/nova/compute/manager.py", line 7956, in _update_available_resource_for_node
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager startup=startup)
  2019-03-25 18:46:49,899.899 62394 ERROR nova.compute.manager File 

[Yahoo-eng-team] [Bug 1823059] [NEW] Specify FROM clause to join from in test_floatingip_via_router_interface_returns_201

2019-04-03 Thread Rodolfo Alonso
Public bug reported:

With SQLAlchemy===1.3.2 [1], in test case 
"test_floatingip_via_router_interface_returns_201", we need to specify which 
FROM clause to join from, when multiple FROMS are present [2]. In this case:
models_v2.Port.id == models_v2.IPAllocation.port_id

[1] https://review.openstack.org/#/c/649508
[2] http://paste.openstack.org/show/748825/
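A minimal sketch of the disambiguation (illustrative; the actual fix is in
the review linked above): pass the join target and ON clause explicitly
instead of letting SQLAlchemy infer the left side:

```
from neutron.db import models_v2


def ports_with_allocations(session):
    # With SQLAlchemy 1.3.x, a bare .join(models_v2.IPAllocation) can fail
    # with "Can't determine which FROM clause to join from" when several
    # FROMs are present; an explicit ON clause removes the ambiguity.
    return (session.query(models_v2.Port)
            .join(models_v2.IPAllocation,
                  models_v2.Port.id == models_v2.IPAllocation.port_id))
```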

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1823059

Title:
  Specify FROM clause to join from in
  test_floatingip_via_router_interface_returns_201

Status in neutron:
  In Progress

Bug description:
  With SQLAlchemy===1.3.2 [1], in test case 
"test_floatingip_via_router_interface_returns_201", we need to specify which 
FROM clause to join from, when multiple FROMS are present [2]. In this case:
  models_v2.Port.id == models_v2.IPAllocation.port_id

  [1] https://review.openstack.org/#/c/649508
  [2] http://paste.openstack.org/show/748825/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1823059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1823043] [NEW] Docs insufficiently clear on the intersection of availability zones, force, and cold and live migrations

2019-04-03 Thread Chris Dent
Public bug reported:


It's hard to find a single place in the nova docs where the impact of 
availability zones (including default) on the capabilities of live or cold 
migrations is clear

In https://docs.openstack.org/nova/latest/user/aggregates.html
#availability-zones-azs is probably a good place to describe what's
going on. Some of the rules are discussed in IRC at

http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-
nova.2019-04-03.log.html#t2019-04-03T15:22:39

The gist is that when a server is created, if it is created in an AZ
(either an explicit one, or in the default 'nova' zone) it is required
to stay in that AZ for all move operations unless there is a force,
which can happen in a live migrate or evacuate.

(Caveats abound in this area, see the IRC log for more discussion which
may help to flavor the docs being created.)

The reasoning for this, as far as I can tell, is that requesting an AZ
is a part of the boot constraints and we don't want the moved server to
be in violation of its own constraints.

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Confirmed


** Tags: doc docs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1823043

Title:
  Docs insufficiently clear on the intersection of availability zones,
  force, and cold and live migrations

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  
  It's hard to find a single place in the nova docs where the impact of 
availability zones (including default) on the capabilities of live or cold 
migrations is clear

  In https://docs.openstack.org/nova/latest/user/aggregates.html
  #availability-zones-azs is probably a good place to describe what's
  going on. Some of the rules are discussed in IRC at

  http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-
  nova.2019-04-03.log.html#t2019-04-03T15:22:39

  The gist is that when a server is created, if it is created in an AZ
  (either an explicit one, or in the default 'nova' zone) it is required
  to stay in that AZ for all move operations unless there is a force,
  which can happen in a live migrate or evacuate.

  (Caveats abound in this area, see the IRC log for more discussion
  which may help to flavor the docs being created.)

  The reasoning for this, as far as I can tell, is that requesting an AZ
  is a part of the boot constraints and we don't want the moved server
  to be in violation of its own constraints.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1823043/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1823038] [NEW] Neutron-keepalived-state-change fails to check initial router state

2019-04-03 Thread Slawek Kaplonski
Public bug reported:

As a fix for bug https://bugs.launchpad.net/neutron/+bug/1818614 we added
to the neutron-keepalived-state-change monitor the possibility to check
the initial status of the router (master or slave).

Unfortunately, for some reason I now see in the journal log of the
functional job that this check is failing with an error like:

Apr 03 09:19:09 ubuntu-bionic-ovh-gra1-0004666718 
neutron-keepalived-state-change[1553]: 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change [-] Failed to get initial status of 
router cd300e6b-8222-4100-8f6a-3b5c4d5fe37b: FailedToDropPrivileges: privsep 
helper command exited non-zero (96)

 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change Traceback (most recent call last):

 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change   File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/agent/l3/keepalived_state_change.py",
 line 98, in handle_initial_state

 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change for address in ip.addr.list():

 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change   File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/agent/linux/ip_lib.py",
 line 540, in list

 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change **kwargs)

 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change   File 
"/home/zuul/src/git.openstack.org/openstack/neutron/neutron/agent/linux/ip_lib.py",
 line 1412, in get_devices_with_ip

 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change devices = 
privileged.get_link_devices(namespace, **link_args)

 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change   File 
"/home/zuul/src/git.openstack.org/openstack/neutron/.tox/dsvm-functional-python27/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py",
 line 240, in _wrap

 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change self.start()

 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change   File 
"/home/zuul/src/git.openstack.org/openstack/neutron/.tox/dsvm-functional-python27/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py",
 line 251, in start

 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change channel = 
daemon.RootwrapClientChannel(context=self)

 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change   File 
"/home/zuul/src/git.openstack.org/openstack/neutron/.tox/dsvm-functional-python27/local/lib/python2.7/site-packages/oslo_privsep/daemon.py",
 line 328, in __init__

 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change raise FailedToDropPrivileges(msg)

 2019-04-03 09:19:09.778 1553 ERROR 
neutron.agent.l3.keepalived_state_change FailedToDropPrivileges: privsep helper 
command exited non-zero (96)


Example of such error: 
http://logs.openstack.org/25/645225/8/check/neutron-functional-python27/0704654/controller/logs/journal_log.txt.gz#_Apr_03_09_19_09

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: gate-failure l3-dvr-backlog

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1823038

Title:
  Neutron-keepalived-state-change fails to check initial router state

Status in neutron:
  Confirmed

Bug description:
  As a fix for bug https://bugs.launchpad.net/neutron/+bug/1818614 we
  added to the neutron-keepalived-state-change monitor the possibility to
  check the initial status of the router 

[Yahoo-eng-team] [Bug 1822960] Re: "delete l7 rule" Parameter Passing Error

2019-04-03 Thread Bernard Cafarelli
Same as other reported issue, neutron-lbaas issues are tracked on
storyboard, can you file the bug there? Thanks!

https://storyboard.openstack.org/#!/project/openstack/neutron-lbaas

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1822960

Title:
  "delete l7 rule"  Parameter Passing Error

Status in neutron:
  Invalid

Bug description:
  Could not delete lbaas l7 rule from l7 policy properly.

  On the plugin side, we tried to delete the records of l7 rules in the database 
with the following code:
  "self.db.delete_l7policy_rule(context, id, l7policy_id)".

  However, in db side, the func delete_l7policy_rule is defined as
  " def delete_l7policy_rule(self, context, id):"

  Therefore, the parameter "l7policy_id" could not be handled.

  As a result, when delete lbaas l7 rule, the following mistakes will happen:
  "TypeError: delete_l7policy_rule() takes exactly 3 arguments (4 given)"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1822960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822968] Re: "lbaas delete l7 rule & policy" could leave dirty data

2019-04-03 Thread Bernard Cafarelli
neutron-lbaas issues are tracked on storyboard, can you file the bug
there?

https://storyboard.openstack.org/#!/project/openstack/neutron-lbaas

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1822968

Title:
  "lbaas delete l7 rule & policy" could leave dirty data

Status in neutron:
  Invalid

Bug description:
  "lbaas delete l7 rule & policy" could leave dirty data

  On the lbaas plugin side, the logic for deleting an l7 rule or l7 policy is as 
follows:
  If the l7 rule (or policy) is attached to a load balancer, the plugin only 
calls the driver to delete the data (without any database operations); otherwise 
the plugin deletes the data in the database.

  The codes are as follows:
  if l7policy_db.attached_to_loadbalancer():
  driver = self._get_driver_for_loadbalancer(
  context, l7policy_db.listener.loadbalancer_id)
  self._call_driver_operation(context, driver.l7policy.delete,
  l7policy_db)
  else:
  self.db.delete_l7policy(context, id)

  As a result:

  When trying to delete an l7 rule (or policy) that is attached to a load
  balancer, dirty data will be left in the database.
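  Illustrative sketch of the point being made (paraphrasing the snippet
  above, not the actual neutron-lbaas fix): the database delete has to
  happen for the attached case as well, whether directly or from the
  driver's completion callback:

  ```
  def delete_l7policy(self, context, id):
      l7policy_db = self.db.get_l7policy(context, id)
      if l7policy_db.attached_to_loadbalancer():
          driver = self._get_driver_for_loadbalancer(
              context, l7policy_db.listener.loadbalancer_id)
          self._call_driver_operation(context, driver.l7policy.delete,
                                      l7policy_db)
      # Hypothetical fix: remove the DB record in both branches once the
      # driver (or its completion callback) has finished.
      self.db.delete_l7policy(context, id)
  ```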

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1822968/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1821015] Re: Attaching virtual GPU devices to guests in nova - libvirt reshaping

2019-04-03 Thread Matt Riedemann
** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Changed in: nova/stein
   Status: New => In Progress

** Changed in: nova/stein
   Importance: Undecided => Medium

** Changed in: nova/stein
 Assignee: (unassigned) => melanie witt (melwitt)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1821015

Title:
  Attaching virtual GPU devices to guests in nova - libvirt reshaping

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) stein series:
  In Progress

Bug description:
  - [x] This is a doc addition request.

  This is coming up from discussion about what to document in the
  release notes for the libvirt VGPU reshaper work that happened in
  Stein:

  https://review.openstack.org/#/c/644412/4/releasenotes/notes/stein-
  prelude-b5fe92310e1e725e.yaml@57

  Specifically about the upgrade impact of VGPU inventory and
  allocations moving from the root compute node resource provider to a
  child resource provider.

  The release notes and docs for VGPUs are not very clear about this
  change (the docs don't actually mention anything about it). So this
  bug is for tracking that gap.

  I think it would be good to at the very least mention that starting in
  the Stein release, VGPU inventory and allocations are tracked on a
  child resource provider of the root compute node resource provider.
  And we could illustrate this using examples from openstack CLIs:

  https://docs.openstack.org/osc-placement/latest/index.html

  For example, list resource providers where there are two, one for the
  root compute node resource provider and one for a VGPU provider. Then
  list inventories on each and see there is VCPU/MEMORY_MB/DISK_GB on
  the compute node root provider and VGPU inventory on the child
  provider. Nothing fancy but something as a way to show, if you set
  this up correctly and things are working correctly, this is what you
  should expect to see.

  ---
  Release: 18.1.0.dev1645 on 2019-02-27 13:28:37
  SHA: 592658aafcb5de8d31b19c8833a8d41ca86a5654
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/virtual-gpu.rst
  URL: https://docs.openstack.org/nova/latest/admin/virtual-gpu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1821015/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1823023] [NEW] l3 namespaces

2019-04-03 Thread YG Kumar
Public bug reported:

Hi,

We have a Rocky OSA setup, version 18.1.4 from git. Whenever we create a
router, it is created and the command "l3-agent-list-hosting-router" shows
a compute node as its host.

But when we log into the compute node and check, there is no namespace for 
that router, and sometimes, even though the namespace is created, running 
"ip netns exec qrouter-x ip a"
throws the error "Unable to find router with name or id 
'7ec2fa3057374a1584418124d5b879ca'":

Also when we do a ip netns on the computes we see this :
- 
Error: Peer netns reference is invalid.
Error: Peer netns reference is invalid.
- 

The neutron.conf file on the computes:
-- 
# Ansible managed
# General, applies to all host groups
[DEFAULT]
debug = True
# Domain to use for building hostnames
dns_domain = vbg.example.cloud
## Rpc all
executor_thread_pool_size = 64
fatal_deprecations = False
l3_ha = False
log_file = /var/log/neutron/neutron.log
rpc_response_timeout = 60
transport_url = 
rabbit://neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.236.201:5671,neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.239.27:5671,neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.239.39:5671//neutron?ssl=1
# Disable stderr logging
use_stderr = False

# Agent
[agent]
polling_interval = 5
report_interval = 60
root_helper = sudo /openstack/venvs/neutron-18.1.4/bin/neutron-rootwrap 
/etc/neutron/rootwrap.conf

# Concurrency (locking mechanisms)
[oslo_concurrency]
lock_path = /var/lock/neutron

# Notifications
[oslo_messaging_notifications]
driver = messagingv2
notification_topics = notifications,notifications_designate
transport_url = 
rabbit://neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.236.201:5671,neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.239.27:5671,neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.239.39:5671//neutron?ssl=1

# Messaging
[oslo_messaging_rabbit]
rpc_conn_pool_size = 30
ssl = True
-- 




l3_agent.ini file
 
# Ansible managed

# General
[DEFAULT]
debug = True

# Drivers
interface_driver = openvswitch

agent_mode = legacy

# Conventional failover
allow_automatic_l3agent_failover = True

# HA failover
ha_confs_path = /var/lib/neutron/ha_confs
ha_vrrp_advert_int = 2
ha_vrrp_auth_password = 
ec86ebf62a85f387569ed0251dc7c8dd9e484949ba320a6ee6bf483758a318
ha_vrrp_auth_type = PASS

# Metadata
enable_metadata_proxy = True

# L3 plugins
 


Please help us with this issue.

Thanks
Y.G Kumar

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1823023

Title:
  l3 namespaces

Status in neutron:
  New

Bug description:
  Hi,

  We have a Rocky OSA setup, version 18.1.4 from git. Whenever we create
  a router, it is created and the command "l3-agent-list-hosting-router"
  shows a compute node as its host.

  But when we log into the compute node and check, there is no namespace for 
that router, and sometimes, even though the namespace is created, running 
"ip netns exec qrouter-x ip a"
  throws the error "Unable to find router with name or id 
'7ec2fa3057374a1584418124d5b879ca'":

  Also when we do a ip netns on the computes we see this :
  - 
  Error: Peer netns reference is invalid.
  Error: Peer netns reference is invalid.
  - 

  The neutron.conf file on the computes:
  -- 
  # Ansible managed
  # General, applies to all host groups
  [DEFAULT]
  debug = True
  # Domain to use for building hostnames
  dns_domain = vbg.example.cloud
  ## Rpc all
  executor_thread_pool_size = 64
  fatal_deprecations = False
  l3_ha = False
  log_file = /var/log/neutron/neutron.log
  rpc_response_timeout = 60
  transport_url = 
rabbit://neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.236.201:5671,neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.239.27:5671,neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.239.39:5671//neutron?ssl=1
  # Disable stderr logging
  use_stderr = False

  # Agent
  [agent]
  polling_interval = 5
  report_interval = 60
  root_helper = sudo /openstack/venvs/neutron-18.1.4/bin/neutron-rootwrap 
/etc/neutron/rootwrap.conf

  # Concurrency (locking mechanisms)
  [oslo_concurrency]
  lock_path = /var/lock/neutron

  # Notifications
  [oslo_messaging_notifications]
  driver = messagingv2
  notification_topics = notifications,notifications_designate
  transport_url = 
rabbit://neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.236.201:5671,neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.239.27:5671,neutron:6a5c9d9634190b954f133f274d793be4d2@172.29.239.39:5671//neutron?ssl=1

  # Messaging
  [oslo_messaging_rabbit]
  rpc_conn_pool_size = 30
  ssl = True
  -- 







  l3_agent.ini file
   
  # Ansible managed

  # General
  [DEFAULT]
  debug = True

  # Drivers
  

[Yahoo-eng-team] [Bug 1775250] Re: Implement DVR-aware announcement of fixed IP's in neutron-dynamic-routing

2019-04-03 Thread Ryan Tidwell
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775250

Title:
  Implement DVR-aware announcement of fixed IP's in neutron-dynamic-
  routing

Status in neutron:
  Fix Released

Bug description:
  The current implementation of neutron-dynamic-routing is compatible
  with DVR, but is not optimized for DVR. It currently announces next-
  hops for all tenant subnets through the central router on the network
  node. Announcing next-hops via the FIP gateway on the compute node was
  never implemented due to the pre-requisite of having DVR fast-exit in
  place for packets to be routed through the FIP namespace properly.
  With DVR fast-exit now in place, it's time to consider adding DVR-
  aware /32 and/or /128 announcements for fixed IP's using the FIP
  gateway as the next-hop.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1775250/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822999] Re: create_port_bulk is broken

2019-04-03 Thread YAMAMOTO Takashi
i added neutron because the change seems unintended.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Summary changed:

- create_port_bulk is broken
+ ml2 create_port_bulk produces incompatible port context

** Summary changed:

- ml2 create_port_bulk produces incompatible port context
+ ml2 create_port_bulk produces incompatible port context to drivers

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1822999

Title:
  ml2 create_port_bulk produces incompatible port context to drivers

Status in networking-midonet:
  New
Status in neutron:
  New

Bug description:
  recently the following tests are failing on the networking-midonet
  gate.

  test_create_bulk_port
  test_bulk_create_delete_port

  probably due to https://review.openstack.org/#/c/624815/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1822999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822986] [NEW] Not clear if www_authenticate_uri is really needed

2019-04-03 Thread massimo.sgaravatto
Public bug reported:

I am validating a small OpenStack Rocky installation.
The nova part seems to be working, but I noticed this warning in the nova log files:

Configuring www_authenticate_uri to point to the public identity endpoint is 
required; clients may not be able to authenticate against an admin endpoint


Indeed I didn't set the attribute, since it is not mentioned in the Rocky 
installation guide.


If it is really required:
- I think it should be mentioned in the installation guide
- The nova services shouldn't start if it is not defined (also because 
according to the config guide it has no default)

If it is not required, the warning message is not very clear IMHO

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1822986

Title:
  Not clear if www_authenticate_uri is really needed

Status in OpenStack Compute (nova):
  New

Bug description:
  I am validating a small OpenStack Rocky installation.
  The nova part seems to be working, but I noticed this warning in the nova log files:

  Configuring www_authenticate_uri to point to the public identity endpoint is 
required; clients may not be able to authenticate against an admin endpoint

  
  Indeed I didn't set the attribute, since it is not mentioned in the Rocky 
installation guide.

  
  If it is really required:
  - I think it should be mentioned in the installation guide
  - The nova services shouldn't start if it is not defined (also because 
according to the config guide it has no default)

  If it is not required, the warning message is not very clear IMHO

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1822986/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1713783] Re: After failed evacuation the recovered source compute tries to delete the instance

2019-04-03 Thread Pavel Glazov
Verified:
Instance is set to error status after a new evacuation. Instance is not deleted.

** Changed in: fuel
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1713783

Title:
  After failed evacuation the recovered source compute tries to delete
  the instance

Status in Fuel for OpenStack:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) ocata series:
  Fix Committed
Status in OpenStack Compute (nova) pike series:
  Fix Committed
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Description
  ===
  In case of a failed evacuation attempt the status of the migration is 
'accepted' instead of 'failed' so when source compute is recovered the compute 
manager tries to delete the instance from the source host. However a secondary 
fault prevents deleting the allocation in placement so the actual deletion of 
the instance fails as well.
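  A minimal sketch of the idea behind the fix (names are stand-ins; the
  real change is in nova's compute/conductor paths): when the evacuation
  cannot be scheduled, the migration record should be flipped to 'failed'
  so a recovered source host does not treat the instance as evacuated:

  ```
  class NoValidHost(Exception):
      """Stand-in for nova.exception.NoValidHost."""


  def evacuate(instance, migration, select_destination):
      try:
          return select_destination(instance)
      except NoValidHost:
          # Mark the migration failed so the recovered source compute does
          # not delete the local instance when it comes back up.
          migration.status = 'failed'
          migration.save()
          raise
  ```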

  Steps to reproduce
  ==
  The following functional test reproduces the bug: 
https://review.openstack.org/#/c/498482/
  What it does: initiate evacuation when no valid host is available and 
evacuation fails, but nova manager still tries to delete the instance.
  Logs:

  2017-08-29 19:11:15,751 ERROR [oslo_messaging.rpc.server] Exception 
during message handling
  NoValidHost: No valid host was found. There are not enough hosts 
available.
  2017-08-29 19:11:16,103 INFO [nova.tests.functional.test_servers] Running 
periodic for compute1 (host1)
  2017-08-29 19:11:16,115 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,120 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,131 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/allocations" 
status: 200 len: 152 microversion: 1.0
  2017-08-29 19:11:16,138 INFO [nova.compute.resource_tracker] Final 
resource view: name=host1 phys_ram=8192MB used_ram=1024MB phys_disk=1028GB 
used_disk=1GB total_vcpus=10 used_vcpus=1 pci_stats=[]
  2017-08-29 19:11:16,146 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,151 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/4e8e23ff-0c52-4cf7-8356-d9fa88536316/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,152 INFO [nova.tests.functional.test_servers] Running 
periodic for compute2 (host2)
  2017-08-29 19:11:16,163 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,168 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,176 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/allocations" 
status: 200 len: 54 microversion: 1.0
  2017-08-29 19:11:16,184 INFO [nova.compute.resource_tracker] Final 
resource view: name=host2 phys_ram=8192MB used_ram=512MB phys_disk=1028GB 
used_disk=0GB total_vcpus=10 used_vcpus=0 pci_stats=[]
  2017-08-29 19:11:16,192 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/aggregates" 
status: 200 len: 18 microversion: 1.1
  2017-08-29 19:11:16,197 INFO [nova.api.openstack.placement.requestlog] 
127.0.0.1 "GET 
/placement/resource_providers/531b1ce8-def1-455d-95b3-4140665d956f/inventories" 
status: 200 len: 401 microversion: 1.0
  2017-08-29 19:11:16,198 INFO [nova.tests.functional.test_servers] 
Finished with periodics
  2017-08-29 19:11:16,255 INFO [nova.api.openstack.requestlog] 127.0.0.1 
"GET 
/v2.1/6f70656e737461636b20342065766572/servers/5058200c-478e-4449-88c1-906fdd572662"
 status: 200 len: 1875 microversion: 2.53 time: 0.056198
  2017-08-29 19:11:16,262 INFO [nova.api.openstack.requestlog] 127.0.0.1 
"GET /v2.1/6f70656e737461636b20342065766572/os-migrations" status: 200 len: 373 
microversion: 2.53 time: 0.004618
  2017-08-29 19:11:16,280 INFO [nova.api.openstack.requestlog] 127.0.0.1 
"PUT 

[Yahoo-eng-team] [Bug 1822970] [NEW] When editing a host aggregate, if its availability zone has been deleted, two prompts appear: one success message and one failure message

2019-04-03 Thread pengyuesheng
Public bug reported:

When editing a host aggregate, if its availability zone has been deleted,
two prompts appear: one success message and one failure message.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1822970

Title:
  When editing a host aggregate, if its availability zone has been deleted,
  two prompts appear: one success message and one failure message

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When editing a host aggregate, if its availability zone has been deleted,
  two prompts appear: one success message and one failure message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1822970/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822971] [NEW] When editing a host aggregate, if its availability zone has been deleted, two prompts appear: one success message and one failure message

2019-04-03 Thread pengyuesheng
Public bug reported:

When editing a host aggregate, if its availability zone has been deleted,
two prompts appear: one success message and one failure message.

** Affects: horizon
 Importance: Undecided
 Assignee: pengyuesheng (pengyuesheng)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => pengyuesheng (pengyuesheng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1822971

Title:
  When editing a host aggregate, if its availability zone has been deleted,
  two prompts appear: one success message and one failure message

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When editing a host aggregate, if its availability zone has been deleted,
  two prompts appear: one success message and one failure message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1822971/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822968] [NEW] "lbaas delete l7 rule & policy" could leave dirty data

2019-04-03 Thread Yue Qu
Public bug reported:

"lbaas delete l7 rule & policy" could leave dirty data

On the lbaas plugin side, the logic for deleting an l7 rule or l7 policy is
as follows:
If the l7 rule (or policy) is attached to a load balancer, the plugin only
calls the driver to delete the data (without any database operation);
otherwise the plugin deletes the data in the database.

The code is as follows:

if l7policy_db.attached_to_loadbalancer():
    driver = self._get_driver_for_loadbalancer(
        context, l7policy_db.listener.loadbalancer_id)
    self._call_driver_operation(context, driver.l7policy.delete,
                                l7policy_db)
else:
    self.db.delete_l7policy(context, id)

As a result:

When trying to delete an l7 rule (or policy) that is attached to a load
balancer, dirty data is left in the database.
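
A minimal sketch of one possible shape of a fix (helper names follow the
snippet above; this is only an assumption, not the committed neutron-lbaas
fix): make sure the database record is removed in both branches, not only in
the "detached" one.

def delete_l7policy(self, context, id):
    # Illustrative sketch only -- assumed helpers, not the actual
    # neutron-lbaas code.
    l7policy_db = self.db.get_l7policy(context, id)
    if l7policy_db.attached_to_loadbalancer():
        driver = self._get_driver_for_loadbalancer(
            context, l7policy_db.listener.loadbalancer_id)
        self._call_driver_operation(context, driver.l7policy.delete,
                                    l7policy_db)
    # Remove the row once the backend delete has been issued, so no
    # orphaned record is left behind when the policy was attached.
    self.db.delete_l7policy(context, id)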

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1822968

Title:
  "lbaas delete l7 rule & policy" could leave dirty data

Status in neutron:
  New

Bug description:
  "lbaas delete l7 rule & policy" could leave dirty data

  On the lbaas plugin side, the logic for deleting an l7 rule or l7 policy is
as follows:
  If the l7 rule (or policy) is attached to a load balancer, the plugin only
calls the driver to delete the data (without any database operation);
otherwise the plugin deletes the data in the database.

  The code is as follows:

  if l7policy_db.attached_to_loadbalancer():
      driver = self._get_driver_for_loadbalancer(
          context, l7policy_db.listener.loadbalancer_id)
      self._call_driver_operation(context, driver.l7policy.delete,
                                  l7policy_db)
  else:
      self.db.delete_l7policy(context, id)

  As a result:

  When trying to delete an l7 rule (or policy) that is attached to a load
  balancer, dirty data is left in the database.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1822968/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1822960] [NEW] "delete l7 rule" Parameter Passing Error

2019-04-03 Thread Yue Qu
Public bug reported:

An lbaas l7 rule cannot be deleted from an l7 policy properly.

On the plugin side, the l7 rule record is deleted from the database with the
following call:
"self.db.delete_l7policy_rule(context, id, l7policy_id)".

However, on the db side, the function delete_l7policy_rule is defined as
"def delete_l7policy_rule(self, context, id):"

Therefore, the extra parameter "l7policy_id" cannot be handled.

As a result, when deleting an lbaas l7 rule, the following error occurs:
"TypeError: delete_l7policy_rule() takes exactly 3 arguments (4 given)"

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1822960

Title:
  "delete l7 rule"  Parameter Passing Error

Status in neutron:
  New

Bug description:
  An lbaas l7 rule cannot be deleted from an l7 policy properly.

  On the plugin side, the l7 rule record is deleted from the database with
the following call:
  "self.db.delete_l7policy_rule(context, id, l7policy_id)".

  However, on the db side, the function delete_l7policy_rule is defined as
  "def delete_l7policy_rule(self, context, id):"

  Therefore, the extra parameter "l7policy_id" cannot be handled.

  As a result, when deleting an lbaas l7 rule, the following error occurs:
  "TypeError: delete_l7policy_rule() takes exactly 3 arguments (4 given)"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1822960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1816755] Re: Flavor id and name validation are unnecessary because of jsonschema validation

2019-04-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/638150
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=c19c3d976019027586080cf0958aff41a373f701
Submitter: Zuul
Branch: master

commit c19c3d976019027586080cf0958aff41a373f701
Author: 翟小君 
Date:   Wed Feb 20 21:21:27 2019 +0800

Remove flavor id and name validation code

Remove flavor id and name validation code because of
jsonschema validation. The jsonschema validation was
added with Ieba96718264ad2ddfba63b65425f7e5bbb8606a9.

Closes-Bug: #1816755

Change-Id: Id6702180a4af6f9f7851a2b912e6d6adeccf90df


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1816755

Title:
  Flavor id and name validation are unnecessary because of jsonschema
  validation

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The flavor id and name validation code in the create() function of
  nova/nova/compute/flavors.py is unreachable because of jsonschema
  validation.
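
  A simplified, illustrative example of the kind of jsonschema check that
  already covers these fields at the API layer (the exact schema and its
  pattern differ in nova; the snippet below is only an assumption for
  illustration):

  # Illustrative only -- not the real nova schema.
  import jsonschema

  flavor_create = {
      'type': 'object',
      'properties': {
          'name': {'type': 'string', 'minLength': 1, 'maxLength': 255,
                   'pattern': '^[a-zA-Z0-9. _-]+$'},
          'id': {'type': ['string', 'integer', 'null'],
                 'minLength': 1, 'maxLength': 255},
      },
      'required': ['name'],
  }

  # Invalid names/ids are rejected here, before flavors.create() runs,
  # which is why the duplicated checks inside create() are unreachable.
  jsonschema.validate({'name': 'm1.tiny', 'id': '42'}, flavor_create)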

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1816755/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp