[Yahoo-eng-team] [Bug 1815029] [NEW] Missing Project ID information in detail page

2019-02-07 Thread Vishal Manchanda
Public bug reported:

Many detail view pages, such as those for instances, images, and volume
snapshots, are missing Project ID information.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1815029

Title:
  Missing Project ID information in detail page

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Many detail view pages, such as those for instances, images, and
  volume snapshots, are missing Project ID information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1815029/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691047] Re: dhcp agent - multiple interfaces, last iface coming up overwrite resolv.conf

2019-02-07 Thread Dr. Jens Harbott
*** This bug is a duplicate of bug 1311040 ***
https://bugs.launchpad.net/bugs/1311040

** This bug has been marked a duplicate of bug 1311040
   Subnet option to disable dns server

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1691047

Title:
  dhcp agent - multiple interfaces, last iface coming up overwrite
  resolv.conf

Status in neutron:
  New

Bug description:
  The resolv.conf gets populated with whatever the last interface that
  came up over DHCP provided.

  Even if the 2nd network/subnet in neutron doesn’t define DNS, it still
  overwrites resolv.conf.

  By default the dnsmasq agent will use itself and its peers as DNS servers 
if no dns_servers are provided for the neutron subnet. Ref:

https://github.com/openstack/neutron/blob/master/neutron/agent/linux/dhcp.py#L877:L887

https://github.com/openstack/neutron/blob/master/neutron/agent/linux/dhcp.py#L970

  This is not always desired. Is there a way to disable this behaviour,
  and simply not offer any dns servers if there are none specified in
  the neutron subnet?
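The fallback behaviour described above can be sketched as follows (the function and parameter names are illustrative, not taken from neutron's dhcp.py):

```python
def dns_servers_for_subnet(subnet_dns_nameservers, agent_ips):
    """Simplified model of the reported behaviour: explicit DNS servers
    configured on the subnet win; with none configured, the agent falls
    back to offering its own (and its peers') addresses, which is what
    ends up clobbering resolv.conf on the second interface."""
    if subnet_dns_nameservers:
        return list(subnet_dns_nameservers)
    return list(agent_ips)

# Explicit servers pass through unchanged...
assert dns_servers_for_subnet(["8.8.8.8"], ["10.0.0.2"]) == ["8.8.8.8"]
# ...but an empty list silently falls back to the agent itself.
assert dns_servers_for_subnet([], ["10.0.0.2"]) == ["10.0.0.2"]
```

The question in the report amounts to asking for a third outcome: return an empty list (offer no DNS servers at all) instead of taking the fallback branch.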

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1691047/+subscriptions



[Yahoo-eng-team] [Bug 1815030] [NEW] FreeBSD: Unable to determine distribution

2019-02-07 Thread do3meli
Public bug reported:

The util.py module is currently not very FreeBSD-friendly, as it always
prints the following warning:

2019-02-07 11:02:02,324 - util.py[WARNING]: Unable to determine
distribution, template expansion may have unexpected results

This is printed at every stage, most likely because the
get_linux_distro() function is called on FreeBSD.

We should probably change that function so that it also handles the
FreeBSD case and does not fail. I have not traced this further, so it
may be that get_linux_distro() should not be called at all on FreeBSD
and a different method used instead, since FreeBSD is not a Linux.
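A minimal sketch of the suggested guard (names are illustrative; _linux_distro_stub stands in for cloud-init's real get_linux_distro()):

```python
import platform

def _linux_distro_stub():
    # Stands in for cloud-init's util.get_linux_distro(); the real helper
    # parses /etc/os-release and friends, which do not exist on FreeBSD.
    return ("ubuntu", "18.04", "bionic")

def get_distro_info(system=None):
    """Only use the Linux-specific helper on Linux; fall back to generic
    platform data on FreeBSD and other non-Linux systems."""
    system = system or platform.system()
    if system == "Linux":
        return _linux_distro_stub()
    # Non-Linux path: derive a (name, version, flavor) tuple without
    # touching any Linux-only files, so no warning is emitted.
    return (system.lower(), platform.release(), "")

assert get_distro_info("Linux")[0] == "ubuntu"
assert get_distro_info("FreeBSD")[0] == "freebsd"
```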

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: freebsd

** Tags added: freebsd

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1815030

Title:
  FreeBSD: Unable to determine distribution

Status in cloud-init:
  New

Bug description:
  The util.py module is currently not very FreeBSD-friendly, as it
  always prints the following warning:

  2019-02-07 11:02:02,324 - util.py[WARNING]: Unable to determine
  distribution, template expansion may have unexpected results

  This is printed at every stage, most likely because the
  get_linux_distro() function is called on FreeBSD.

  We should probably change that function so that it also handles the
  FreeBSD case and does not fail. I have not traced this further, so it
  may be that get_linux_distro() should not be called at all on FreeBSD
  and a different method used instead, since FreeBSD is not a Linux.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1815030/+subscriptions



[Yahoo-eng-team] [Bug 1539640] Re: Make dhcp agent not recycle ports in binding_failed state

2019-02-07 Thread Dr. Jens Harbott
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1539640

Title:
  Make dhcp agent not recycle ports in binding_failed state

Status in neutron:
  Fix Released

Bug description:
  We just happened to have broken dhcp because the dhcp ports were in
  binding_failed state.

  We tried to disable/enable dhcp again, but apparently (not 100% sure,
  unfortunately), disabling didn't remove the ports (assuming that it
  was because they were in binding_failed state) and re-enabling just
  ended up with ports in binding_failed states again (likely the same
  ports because dhcp agent tries to recycle ports whenever possible).
  Unfortunately, I don't have the logs / traces of this.

  So disabling / re-enabling didn't fix anything while, I believe, users
  would try that first to fix the situation. If we could make that just
  work, then that would make it easier for people to get out of this "no
  dhcp" problem when ports are failing to bind.

  In the end, we had to remove the ports to have new ports created when
  dhcp was disabled / re-enabled.
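One way to model the requested behaviour — a sketch of the idea, not the actual neutron patch — is to filter out failed bindings before recycling ports:

```python
def reusable_dhcp_ports(ports):
    """Never recycle a port whose binding failed, so that re-enabling
    DHCP creates fresh ports instead of reusing broken ones.  Ports are
    modelled as plain dicts carrying the standard binding attribute."""
    return [p for p in ports
            if p.get("binding:vif_type") != "binding_failed"]

ports = [
    {"id": "p1", "binding:vif_type": "ovs"},
    {"id": "p2", "binding:vif_type": "binding_failed"},
]
# Only the healthy port survives the filter and is eligible for reuse.
assert [p["id"] for p in reusable_dhcp_ports(ports)] == ["p1"]
```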

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1539640/+subscriptions



[Yahoo-eng-team] [Bug 1814913] Re: A new instance_mapping record will have queued_for_delete set to NULL

2019-02-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/635185
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ccec9ba82de7c9525981a34bb126e9ca98042d04
Submitter: Zuul
Branch:master

commit ccec9ba82de7c9525981a34bb126e9ca98042d04
Author: Dan Smith 
Date:   Wed Feb 6 06:54:00 2019 -0800

Fix InstanceMapping to always default queued_for_delete=False

This object has a default=False setting for queued_for_delete, but never
actually sets that value. All newly created records should have a non-NULL
value for this field, and we have a migration to fix them, so this
change explicitly forces that =False, unless the object is being created
with a value set.

Closes-Bug: #1814913
Change-Id: I99c5cc24c7e9bf5e2e72ffc868990b87b0e8e3f8


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1814913

Title:
  A new instance_mapping record will have queued_for_delete set to NULL

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  Confirmed

Bug description:
  After this change: https://review.openstack.org/#/c/584504, where we
  changed the default value of the queued_for_delete column from False
  to NULL in the sqla code for the instance_mappings object (to enable
  the data migration for queued_for_delete), we forgot to set the
  default value to False upon creation of new instance_mappings in the
  create() method. Hence new instance_mappings always ended up with NULL
  values in the db, meaning the data migration
  "populate_queued_for_delete" would never finish.

  So in the InstanceMapping.create() method, queued_for_delete should
  always be set to False explicitly so that new mappings get False as
  the default value.
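The fix idea can be illustrated with a toy model of the versioned-object create() pattern (class and method shapes are simplified for illustration; this is not the actual nova code):

```python
class InstanceMappingSketch:
    """Toy model of the oslo.versionedobjects pattern behind the fix:
    a declared default is useless unless create() materializes it."""

    def __init__(self, **fields):
        self._fields = dict(fields)

    def obj_attr_is_set(self, name):
        return name in self._fields

    def create(self):
        # The fix: explicitly force the default before the DB insert,
        # unless the caller already provided a value.
        if not self.obj_attr_is_set("queued_for_delete"):
            self._fields["queued_for_delete"] = False
        return self._fields  # stands in for the row actually written

# Without the guard above, a bare create() would write NULL here.
assert InstanceMappingSketch().create()["queued_for_delete"] is False
# An explicitly provided value is left untouched.
assert InstanceMappingSketch(
    queued_for_delete=True).create()["queued_for_delete"] is True
```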

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1814913/+subscriptions



[Yahoo-eng-team] [Bug 1815051] [NEW] Bionic netplan render invalid yaml duplicate anchor declaration for nameserver entries

2019-02-07 Thread Márton Kiss
Public bug reported:

The netplan configuration redeclares the nameservers anchor in every
section (vlans, bonds) and uses the same id (id001) for similar entries.

In this specific case the network configuration in MAAS has a bond0
with two vlans, bond0.3502 and bond0.3503, and an untagged bond1 without
vlans. The rendered 50-cloud-init.yaml looks like this:

network:
    version: 2
    ethernets:
        ...
    bonds:
        ...
        bond1:
            ...
            nameservers: &id001        <- anchor declaration here
                addresses:
                - 255.255.255.1
                - 255.255.255.2
                - 255.255.255.3
                - 255.255.255.5
                search:
                - customer.domain
                - maas
        ...
        bondM:
            ...
            nameservers: *id001

    vlans:
        bond0.3502:
            ...
            nameservers: &id001        <- anchor redeclaration here
                addresses:
                - 255.255.255.1
                - 255.255.255.2
                - 255.255.255.3
                - 255.255.255.5
                search:
                - customer.domain
                - maas
        bond0.3503:
            ...
            nameservers: *id001

Because cloud-init renders an invalid YAML file, netplan apply produces
the following error (due to the anchor redeclaration in the vlans
section):

   Invalid YAML at /etc/netplan/50-cloud-init.yaml line 118 column 25:
second occurence

This rendering bug prevents us from using an untagged bond and a bond
with vlans in the same configuration.
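One workaround at the renderer level — an assumption for illustration, not the actual cloud-init fix — is to tell PyYAML not to emit anchors and aliases at all, so shared nameserver dicts are simply written out twice:

```python
import yaml

class NoAliasDumper(yaml.SafeDumper):
    # Suppress &id001 anchors / *id001 aliases entirely; duplicated
    # output is valid YAML, whereas redeclared anchors are not.
    def ignore_aliases(self, data):
        return True

shared_ns = {"addresses": ["255.255.255.1"],
             "search": ["customer.domain", "maas"]}
cfg = {"network": {
    "version": 2,
    "bonds": {"bond1": {"nameservers": shared_ns}},
    "vlans": {"bond0.3502": {"nameservers": shared_ns}},
}}

text = yaml.dump(cfg, Dumper=NoAliasDumper, default_flow_style=False)
assert "&id001" not in text and "*id001" not in text
# The semantics survive the round-trip even without aliases.
assert yaml.safe_load(text)["network"]["vlans"]["bond0.3502"][
    "nameservers"] == shared_ns
```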

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1815051

Title:
  Bionic netplan render invalid yaml duplicate anchor declaration for
  nameserver entries

Status in cloud-init:
  New

Bug description:
  The netplan configuration redeclares the nameservers anchor in every
  section (vlans, bonds) and uses the same id (id001) for similar
  entries.

  In this specific case the network configuration in MAAS has a bond0
  with two vlans, bond0.3502 and bond0.3503, and an untagged bond1
  without vlans. The rendered 50-cloud-init.yaml looks like this:

  network:
      version: 2
      ethernets:
          ...
      bonds:
          ...
          bond1:
              ...
              nameservers: &id001        <- anchor declaration here
                  addresses:
                  - 255.255.255.1
                  - 255.255.255.2
                  - 255.255.255.3
                  - 255.255.255.5
                  search:
                  - customer.domain
                  - maas
          ...
          bondM:
              ...
              nameservers: *id001

      vlans:
          bond0.3502:
              ...
              nameservers: &id001        <- anchor redeclaration here
                  addresses:
                  - 255.255.255.1
                  - 255.255.255.2
                  - 255.255.255.3
                  - 255.255.255.5
                  search:
                  - customer.domain
                  - maas
          bond0.3503:
              ...
              nameservers: *id001

  Because cloud-init renders an invalid YAML file, netplan apply
  produces the following error (due to the anchor redeclaration in the
  vlans section):

 Invalid YAML at /etc/netplan/50-cloud-init.yaml line 118 column 25:
  second occurence

  This rendering bug prevents us from using an untagged bond and a bond
  with vlans in the same configuration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1815051/+subscriptions



[Yahoo-eng-team] [Bug 1814859] Re: Neutron quotas for firewall params fail to update

2019-02-07 Thread Slawek Kaplonski
** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1814859

Title:
  Neutron quotas for firewall params fail to update

Status in python-neutronclient:
  New

Bug description:
  While attempting to change the FWaaS quota params on a FWaaSv2
  environment, the update operation fails.

  See outputs below:
  $ neutron quota-show
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  +---+---+
  | Field | Value |
  +---+---+
  | endpoint_group| -1|
  | firewall_group| -1|
  | firewall_policy   | 10|
  | firewall_rule | 100   |
  | floatingip| 50|
  | healthmonitor | -1|
  | housekeeper   | -1|
  | ikepolicy | -1|
  | ipsec_site_connection | -1|
  | ipsecpolicy   | -1|
  | l2-gateway-connection | -1|
  | l7policy  | -1|
  | listener  | -1|
  | loadbalancer  | 10|
  | member| -1|
  | network   | 100   |
  | pool  | 10|
  | port  | 500   |
  | rbac_policy   | 10|
  | router| 10|
  | security_group| 10|
  | security_group_rule   | 100   |
  | subnet| 100   |
  | subnetpool| -1|
  | vpnservice| -1|
  +---+---+

  $ neutron quota-update --tenant-id 8c2d97bf3d0047959ff4cf57dc5ac410 
--firewall-rule 200
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Must specify a valid resource with new quota value

  $ neutron quota-update --tenant-id 8c2d97bf3d0047959ff4cf57dc5ac410 
--firewall-policy 100
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Must specify a valid resource with new quota value

  $ neutron quota-update --tenant-id 8c2d97bf3d0047959ff4cf57dc5ac410 
--firewall_group 200
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Must specify a valid resource with new quota value

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1814859/+subscriptions



[Yahoo-eng-team] [Bug 1771506] Re: Unit test failure with OpenSSL 1.1.1

2019-02-07 Thread Corey Bryant
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771506

Title:
  Unit test failure with OpenSSL 1.1.1

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in Ubuntu Cloud Archive rocky series:
  Triaged
Status in Ubuntu Cloud Archive stein series:
  Triaged
Status in OpenStack Compute (nova):
  In Progress
Status in nova package in Ubuntu:
  Triaged
Status in nova source package in Bionic:
  Triaged
Status in nova source package in Cosmic:
  Triaged
Status in nova source package in Disco:
  Triaged

Bug description:
  Hi,

  Building the Nova Queens package with OpenSSL 1.1.1 leads to unit test
  problems. This was reported to Debian at:
  https://bugs.debian.org/898807

  The new openssl 1.1.1 is currently in experimental [0]. This package
  failed to build against this new package [1] while it built fine
  against the openssl version currently in unstable [2]. Could you
  please have a look?

  FAIL: 
nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_newlines_inside_message
  
|nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_newlines_inside_message
  |--
  |_StringException: pythonlogging:'': {{{2018-05-01 20:48:09,960 WARNING 
[oslo_config.cfg] Config option key_manager.api_class  is deprecated. Use 
option key_manager.backend instead.}}}
  |
  |Traceback (most recent call last):
  |  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1592, in test_encrypt_newlines_inside_message
  |self._test_encryption('Message\nwith\ninterior\nnewlines.')
  |  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1577, in _test_encryption
  |enc = self.alice.encrypt(message)
  |  File "/<>/nova/virt/xenapi/agent.py", line 432, in encrypt
  |return self._run_ssl(text).strip('\n')
  |  File "/<>/nova/virt/xenapi/agent.py", line 428, in _run_ssl
  |raise RuntimeError(_('OpenSSL error: %s') % err)
  |RuntimeError: OpenSSL error: *** WARNING : deprecated key derivation used.
  |Using -iter or -pbkdf2 would be better.

  It looks like this is caused by the additional warning message on
  stderr.

  [0] https://lists.debian.org/msgid-search/20180501211400.ga21...@roeckx.be
  [1] 
https://breakpoint.cc/openssl-rebuild/2018-05-03-rebuild-openssl1.1.1-pre6/attempted/nova_17.0.0-4_amd64-2018-05-01T20%3A39%3A38Z
  [2] 
https://breakpoint.cc/openssl-rebuild/2018-05-03-rebuild-openssl1.1.1-pre6/successful/nova_17.0.0-4_amd64-2018-05-02T18%3A46%3A36Z
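  The idea of tolerating the new stderr warning can be sketched like
  this (a hypothetical filter, not the merged patch):

```python
def fatal_ssl_errors(stderr_text):
    """Ignore the known OpenSSL 1.1.1 key-derivation warnings on stderr
    and only treat the remaining lines as a real failure, so _run_ssl()
    would no longer raise on a warning-only stderr."""
    benign = ("*** WARNING : deprecated key derivation used.",
              "Using -iter or -pbkdf2 would be better.")
    return [line for line in stderr_text.splitlines()
            if line.strip() and line.strip() not in benign]

warning_only = ("*** WARNING : deprecated key derivation used.\n"
                "Using -iter or -pbkdf2 would be better.\n")
assert fatal_ssl_errors(warning_only) == []       # not fatal
assert fatal_ssl_errors("bad decrypt\n") == ["bad decrypt"]
```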

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1771506/+subscriptions



[Yahoo-eng-team] [Bug 1771506] Re: Unit test failure with OpenSSL 1.1.1

2019-02-07 Thread Corey Bryant
@xnox, thanks for the patch. I've submitted it to the upstream master
branch. Once that lands I'll start backporting to stable branches and
Ubuntu.

** Also affects: nova (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

** Changed in: nova (Ubuntu Bionic)
   Status: New => Triaged

** Changed in: nova (Ubuntu Bionic)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Cosmic)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Cosmic)
   Status: New => Triaged

** Changed in: nova (Ubuntu Disco)
   Importance: Undecided => High

** Changed in: nova (Ubuntu Disco)
   Status: New => Triaged

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/queens
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/stein
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/rocky
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/queens
   Status: New => Triaged

** Changed in: cloud-archive/stein
   Status: New => Triaged

** Changed in: cloud-archive/queens
   Importance: Undecided => High

** Changed in: cloud-archive/rocky
   Status: New => Triaged

** Changed in: cloud-archive/stein
   Importance: Undecided => High

** Changed in: cloud-archive/rocky
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771506

Title:
  Unit test failure with OpenSSL 1.1.1

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive queens series:
  Triaged
Status in Ubuntu Cloud Archive rocky series:
  Triaged
Status in Ubuntu Cloud Archive stein series:
  Triaged
Status in OpenStack Compute (nova):
  In Progress
Status in nova package in Ubuntu:
  Triaged
Status in nova source package in Bionic:
  Triaged
Status in nova source package in Cosmic:
  Triaged
Status in nova source package in Disco:
  Triaged

Bug description:
  Hi,

  Building the Nova Queens package with OpenSSL 1.1.1 leads to unit test
  problems. This was reported to Debian at:
  https://bugs.debian.org/898807

  The new openssl 1.1.1 is currently in experimental [0]. This package
  failed to build against this new package [1] while it built fine
  against the openssl version currently in unstable [2]. Could you
  please have a look?

  FAIL: 
nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_newlines_inside_message
  
|nova.tests.unit.virt.xenapi.test_xenapi.XenAPIDiffieHellmanTestCase.test_encrypt_newlines_inside_message
  |--
  |_StringException: pythonlogging:'': {{{2018-05-01 20:48:09,960 WARNING 
[oslo_config.cfg] Config option key_manager.api_class  is deprecated. Use 
option key_manager.backend instead.}}}
  |
  |Traceback (most recent call last):
  |  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1592, in test_encrypt_newlines_inside_message
  |self._test_encryption('Message\nwith\ninterior\nnewlines.')
  |  File "/<>/nova/tests/unit/virt/xenapi/test_xenapi.py", line 
1577, in _test_encryption
  |enc = self.alice.encrypt(message)
  |  File "/<>/nova/virt/xenapi/agent.py", line 432, in encrypt
  |return self._run_ssl(text).strip('\n')
  |  File "/<>/nova/virt/xenapi/agent.py", line 428, in _run_ssl
  |raise RuntimeError(_('OpenSSL error: %s') % err)
  |RuntimeError: OpenSSL error: *** WARNING : deprecated key derivation used.
  |Using -iter or -pbkdf2 would be better.

  It looks like this is caused by the additional warning message on
  stderr.

  [0] https://lists.debian.org/msgid-search/20180501211400.ga21...@roeckx.be
  [1] 
https://breakpoint.cc/openssl-rebuild/2018-05-03-rebuild-openssl1.1.1-pre6/attempted/nova_17.0.0-4_amd64-2018-05-01T20%3A39%3A38Z
  [2] 
https://breakpoint.cc/openssl-rebuild/2018-05-03-rebuild-openssl1.1.1-pre6/successful/nova_17.0.0-4_amd64-2018-05-02T18%3A46%3A36Z

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1771506/+subscriptions



[Yahoo-eng-team] [Bug 1814953] Re: Compute API in nova - bad link in 'create extra specs for flavor' API reference

2019-02-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/635252
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=46a9d73ad868d09377c739d6f1ed4758f74bb941
Submitter: Zuul
Branch:master

commit 46a9d73ad868d09377c739d6f1ed4758f74bb941
Author: Matt Riedemann 
Date:   Wed Feb 6 14:46:47 2019 -0500

api-ref: fix link to flavor extra specs docs

This fixes the link, re-words it a bit, moves it to the main
description (since it applies to PUT also) and drops the note
since we don't need note formatting for linking in reference
material.

Closes-Bug: #1814953

Change-Id: Ia24cda353bdcadf3fe8405aac588e8abf1100608


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1814953

Title:
  Compute API in nova - bad link in 'create extra specs for flavor' API
  reference

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  - [x] This doc is inaccurate in this way:

  The link here:

  https://developer.openstack.org/api-ref/compute/#create-extra-specs-
  for-a-flavor

  To the compute flavors extra spec guide is broken. Those docs live
  here now:

  https://docs.openstack.org/nova/latest/user/flavors.html#extra-specs

  ---
  Release: 18.1.0.dev1168 on 2018-12-17 05:51:03
  SHA: b01da49dfc38057a751cb59f4a7a99dd7f20b6ff
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/api-ref/source/index.rst
  URL: https://developer.openstack.org/api-ref/compute/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1814953/+subscriptions



[Yahoo-eng-team] [Bug 1815082] [NEW] "DBNonExistentTable: (sqlite3.OperationalError) no such table: services" when starting nova-metadata under uwsgi

2019-02-07 Thread Matt Riedemann
Public bug reported:

Jens Harbott reported this in devstack:

https://review.openstack.org/#/c/635519/

He's running the n-api-meta service on the subnode per the multinode
guide:

https://docs.openstack.org/devstack/latest/guides/multinode-lab.html

However with current devstack, which doesn't configure nova.conf with
database access on the subnode, n-api-meta fails with this:

http://paste.openstack.org/show/744683/

Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: CRITICAL 
nova [None req-538e6b11-f91b-48d8-9c7a-d177dff7739b None None] Unhandled error: 
DBNonExistentTable: (sqlite3.OperationalError) no such t
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
Traceback (most recent call last):
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/usr/local/bin/nova-metadata-wsgi", line 52, in <module>
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
  application = init_application()
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/opt/stack/nova/nova/api/metadata/wsgi.py", line 20, in init_application
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
  return wsgi_app.init_application(NAME)
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/opt/stack/nova/nova/api/openstack/wsgi_app.py", line 82, in 
init_application
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
  _setup_service(CONF.host, name)
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/opt/stack/nova/nova/api/openstack/wsgi_app.py", line 49, in 
_setup_service
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
  ctxt, host, binary)
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", 
line 184, in wrapper
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
  result = fn(cls, context, *args, **kwargs)
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/opt/stack/nova/nova/objects/service.py", line 334, in 
get_by_host_and_binary
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
  host, binary)
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/opt/stack/nova/nova/db/api.py", line 127, in 
service_get_by_host_and_binary
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
  return IMPL.service_get_by_host_and_binary(context, host, binary)
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 242, in wrapped
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
  return f(context, *args, **kwargs)
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 500, in 
service_get_by_host_and_binary
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
  filter_by(binary=binary).\
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 
2979, in first
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
  ret = list(self[0:1])
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 
2771, in __getitem__
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
  return list(res)
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 
3081, in __iter__
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
  return self._execute_and_instances(context)
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 
3106, in _execute_and_instances
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
  result = conn.execute(querycontext.statement, self._params)
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
980, in execute
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
  return meth(self, multiparams, params)
Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova   
File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/elements.py", line 
273, in _execute_on_connection
Feb 07 13:33:32 jh-devstack01a devstack@n-
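The failure mode, and one possible guard, can be sketched as follows (the exception and function names here are hypothetical; the real fix may differ):

```python
class DatabaseNotConfigured(Exception):
    """Stands in for the 'no usable database' failure hit above."""

def setup_service(lookup_service, host, binary):
    """A metadata-only node without DB access skips the service-record
    lookup instead of crashing during init_application().  lookup_service
    stands in for Service.get_by_host_and_binary()."""
    try:
        return lookup_service(host, binary)
    except DatabaseNotConfigured:
        return None  # keep serving metadata; no service record available

def broken_lookup(host, binary):
    # Models the traceback above: any DB touch fails on this node.
    raise DatabaseNotConfigured()

assert setup_service(broken_lookup, "subnode", "nova-metadata") is None
```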

[Yahoo-eng-team] [Bug 1815082] Re: "DBNonExistentTable: (sqlite3.OperationalError) no such table: services" when starting nova-metadata under uwsgi

2019-02-07 Thread Matt Riedemann
** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => Triaged

** Changed in: nova/queens
   Status: New => Triaged

** Changed in: nova/pike
   Importance: Undecided => Medium

** Changed in: nova/rocky
   Status: New => Triaged

** Changed in: nova/queens
   Importance: Undecided => Medium

** Changed in: nova/rocky
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815082

Title:
  "DBNonExistentTable: (sqlite3.OperationalError) no such table:
  services" when starting nova-metadata under uwsgi

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) pike series:
  Triaged
Status in OpenStack Compute (nova) queens series:
  Triaged
Status in OpenStack Compute (nova) rocky series:
  Triaged

Bug description:
  Jens Harbott reported this in devstack:

  https://review.openstack.org/#/c/635519/

  He's running the n-api-meta service on the subnode per the multinode
  guide:

  https://docs.openstack.org/devstack/latest/guides/multinode-lab.html

  However with current devstack, which doesn't configure nova.conf with
  database access on the subnode, n-api-meta fails with this:

  http://paste.openstack.org/show/744683/

  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: CRITICAL 
nova [None req-538e6b11-f91b-48d8-9c7a-d177dff7739b None None] Unhandled error: 
DBNonExistentTable: (sqlite3.OperationalError) no such t
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
Traceback (most recent call last):
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
  File "/usr/local/bin/nova-metadata-wsgi", line 52, in <module>
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
application = init_application()
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
  File "/opt/stack/nova/nova/api/metadata/wsgi.py", line 20, in init_application
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
return wsgi_app.init_application(NAME)
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
  File "/opt/stack/nova/nova/api/openstack/wsgi_app.py", line 82, in 
init_application
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
_setup_service(CONF.host, name)
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
  File "/opt/stack/nova/nova/api/openstack/wsgi_app.py", line 49, in 
_setup_service
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
ctxt, host, binary)
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
  File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", 
line 184, in wrapper
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
result = fn(cls, context, *args, **kwargs)
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
  File "/opt/stack/nova/nova/objects/service.py", line 334, in 
get_by_host_and_binary
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
host, binary)
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
  File "/opt/stack/nova/nova/db/api.py", line 127, in 
service_get_by_host_and_binary
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
return IMPL.service_get_by_host_and_binary(context, host, binary)
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
  File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 242, in wrapped
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
return f(context, *args, **kwargs)
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
  File "/opt/stack/nova/nova/db/sqlalchemy/api.py", line 500, in 
service_get_by_host_and_binary
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
filter_by(binary=binary).\
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 
2979, in first
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
ret = list(self[0:1])
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 
2771, in __getitem__
  Feb 07 13:33:32 jh-devstack01a devstack@n-api-meta.service[22651]: ERROR nova 
return list(res

[Yahoo-eng-team] [Bug 1815109] [NEW] cloud-init modules --mode final exit with "KeyError: 'modules-init'" after upgrade to version 18.2

2019-02-07 Thread Antonio Romito
Public bug reported:

Description of problem:

After the upgrade of cloud-init to version 18.2, cloud-final.service does
not start due to the following error, and the service remains in a
non-running state

-
# service cloud-final status
Redirecting to /bin/systemctl status cloud-final.service
● cloud-final.service - Execute cloud user/final scripts
   Loaded: loaded (/usr/lib/systemd/system/cloud-final.service; enabled; vendor 
preset: disabled)
   Active: failed (Result: exit-code) since Fri 2019-02-01 13:14:31 CET; 28min 
ago
  Process: 21927 ExecStart=/usr/bin/cloud-init modules --mode=final 
(code=exited, status=1/FAILURE)
 Main PID: 21927 (code=exited, status=1/FAILURE)
-

Version-Release number of selected component (if applicable):

Red Hat Enterprise Linux Server release 7.6 (Maipo)
cloud-init-18.2-1.el7_6.1.x86_64

How reproducible:

Steps to Reproduce:
1. [root@rhvm ~]# cloud-init modules --mode=final

Actual results:

[root@rhvm ~]# cloud-init modules --mode final
Cloud-init v. 18.2 running 'modules:final' at Wed, 06 Feb 2019 20:00:14 +. 
Up 10634.29 seconds.
Cloud-init v. 18.2 finished at Wed, 06 Feb 2019 20:00:15 +. Datasource 
DataSourceNoCloud [seed=/dev/sr0][dsmode=net].  Up 10634.40 seconds
Traceback (most recent call last):
  File "/usr/bin/cloud-init", line 9, in 
load_entry_point('cloud-init==18.2', 'console_scripts', 'cloud-init')()
  File "/usr/lib/python2.7/site-packages/cloudinit/cmd/main.py", line 882, in 
main
get_uptime=True, func=functor, args=(name, args))
  File "/usr/lib/python2.7/site-packages/cloudinit/util.py", line 2388, in 
log_time
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/cloudinit/cmd/main.py", line 679, in 
status_wrapper
if v1[m]['errors']:
KeyError: 'modules-init'


Expected results:

[root@rhvm ~]# cloud-init modules --mode final
Cloud-init v. 18.2 running 'modules:final' at Wed, 06 Feb 2019 19:41:50 +. 
Up 9530.23 seconds.
Cloud-init v. 18.2 finished at Wed, 06 Feb 2019 19:41:50 +. Datasource 
DataSourceNoCloud [seed=/dev/sr0][dsmode=net].  Up 9530.34 seconds


Additional info:

This problem does not happen with the previous cloud-init version:

cloud-init.x86_64 0:0.7.9-24.el7_5.1 will be updated
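
Per the traceback, status_wrapper indexes the status dict with a mode key
('modules-init') that was never recorded, so the plain dict lookup raises
KeyError. A minimal sketch of the failing pattern and a guarded variant (the
dict shape is inferred from the traceback; the names are illustrative, not
cloud-init's actual code):

```python
# Status data as implied by the traceback: only some modes were recorded.
v1 = {'modules-final': {'errors': []}}  # no 'modules-init' entry

mode = 'modules-init'

# Failing pattern from status_wrapper:
#     if v1[mode]['errors']:        # raises KeyError: 'modules-init'

# Guarded variant that tolerates a missing mode entry:
errors = v1.get(mode, {}).get('errors', [])
print(errors)  # []
```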

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Merge proposal linked:
   https://code.launchpad.net/~aromito/cloud-init/+git/cloud-init/+merge/362878

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1815109

Title:
  cloud-init modules --mode final exit with "KeyError: 'modules-init'"
  after upgrade to version 18.2

Status in cloud-init:
  New

Bug description:
  Description of problem:

  After the upgrade of cloud-init to version 18.2, cloud-final.service does
  not start due to the following error, and the service remains in a
  non-running state

  -
  # service cloud-final status
  Redirecting to /bin/systemctl status cloud-final.service
  ● cloud-final.service - Execute cloud user/final scripts
 Loaded: loaded (/usr/lib/systemd/system/cloud-final.service; enabled; 
vendor preset: disabled)
 Active: failed (Result: exit-code) since Fri 2019-02-01 13:14:31 CET; 
28min ago
Process: 21927 ExecStart=/usr/bin/cloud-init modules --mode=final 
(code=exited, status=1/FAILURE)
   Main PID: 21927 (code=exited, status=1/FAILURE)
  -

  Version-Release number of selected component (if applicable):

  Red Hat Enterprise Linux Server release 7.6 (Maipo)
  cloud-init-18.2-1.el7_6.1.x86_64

  How reproducible:

  Steps to Reproduce:
  1. [root@rhvm ~]# cloud-init modules --mode=final

  Actual results:

  [root@rhvm ~]# cloud-init modules --mode final
  Cloud-init v. 18.2 running 'modules:final' at Wed, 06 Feb 2019 20:00:14 
+. Up 10634.29 seconds.
  Cloud-init v. 18.2 finished at Wed, 06 Feb 2019 20:00:15 +. Datasource 
DataSourceNoCloud [seed=/dev/sr0][dsmode=net].  Up 10634.40 seconds
  Traceback (most recent call last):
File "/usr/bin/cloud-init", line 9, in 
  load_entry_point('cloud-init==18.2', 'console_scripts', 'cloud-init')()
File "/usr/lib/python2.7/site-packages/cloudinit/cmd/main.py", line 882, in 
main
  get_uptime=True, func=functor, args=(name, args))
File "/usr/lib/python2.7/site-packages/cloudinit/util.py", line 2388, in 
log_time
  ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/cloudinit/cmd/main.py", line 679, in 
status_wrapper
  if v1[m]['errors']:
  KeyError: 'modules-init'

  
  Expected results:

  [root@rhvm ~]# cloud-init modules --mode final
  Cloud-init v. 18.2 running 'modules:final' at Wed, 06 Feb 2019 19:41:50 
+. Up 9530.23 seconds.
  Cloud-init v. 18.2 finished at Wed, 06 Feb 2019 19:41:50 +. Datasource 
DataSourceNoCloud [seed=/dev/sr0][dsmode=net].  Up 9530.34 seconds

  
  Additional info:

  This problem does not happen with the previous cloud-init version:

  cloud-init.x86_64 0:0.7.9-24.el7_5.1 will be updated

[Yahoo-eng-team] [Bug 1815051] Re: Bionic netplan render invalid yaml duplicate anchor declaration for nameserver entries

2019-02-07 Thread Ryan Harper
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => High

** Changed in: cloud-init (Ubuntu)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1815051

Title:
  Bionic netplan render invalid yaml duplicate anchor declaration for
  nameserver entries

Status in cloud-init:
  In Progress
Status in cloud-init package in Ubuntu:
  In Progress
Status in cloud-init source package in Xenial:
  New
Status in cloud-init source package in Bionic:
  New
Status in cloud-init source package in Cosmic:
  New

Bug description:
  The netplan configuration redeclares the nameservers anchor for every
single section (vlans, bonds), and uses the same id for similar entries
  (id001).

  In this specific case, the network configuration in maas has a bond0
  with two vlans, bond0.3502 and bond0.3503, and an untagged bond1
  without vlans. The rendered 50-cloud-init.yaml looks like this:

  network:
  version: 2
  ethernets:
  ...
  bonds:
  ...
  bond1:
  ...
  nameservers: &id001 <- anchor declaration here
  addresses:
  - 255.255.255.1
  - 255.255.255.2
  - 255.255.255.3
  - 255.255.255.5
  search:
  - customer.domain
  - maas
  ...
  bondM:
  ...
  nameservers: *id001

 vlans:
  bond0.3502:
  ...
  nameservers: &id001 <- anchor redeclaration here
  addresses:
  - 255.255.255.1
  - 255.255.255.2
  - 255.255.255.3
  - 255.255.255.5
  search:
  - customer.domain
  - maas
  bond0.3503:
  ...
  nameservers: *id001

  As cloud-init renders an invalid YAML file, netplan apply
  produces the following error (due to the anchor redeclaration in the
  vlans section):

 Invalid YAML at /etc/netplan/50-cloud-init.yaml line 118 column 25:
  second occurence

  This render bug prevents us using the untagged bond and the bond with
  the vlans in the same configuration.
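
The anchor collision can be modeled with a toy sketch (an assumption about
the mechanism: each section is serialized with a fresh YAML emitter, so
PyYAML-style anchor numbering restarts at id001 per section; dump_section
below is illustrative, not cloud-init's renderer):

```python
def dump_section(objs):
    """Toy anchor assignment: each call restarts numbering at id001,
    mirroring a fresh YAML serializer."""
    anchors, lines = {}, []
    for name, obj in objs:
        if id(obj) in anchors:
            # Same object seen again: emit an alias.
            lines.append(f"{name}: *{anchors[id(obj)]}")
        else:
            anchor = f"id{len(anchors) + 1:03d}"
            anchors[id(obj)] = anchor
            lines.append(f"{name}: &{anchor}")
    return lines

ns = {'addresses': ['255.255.255.1'], 'search': ['maas']}
bonds = dump_section([('bond1', ns), ('bondM', ns)])
vlans = dump_section([('bond0.3502', ns), ('bond0.3503', ns)])

# Both sections declare &id001, so the concatenated document redeclares
# the anchor -- exactly what netplan rejects.
print(bonds[0], vlans[0])  # bond1: &id001 bond0.3502: &id001
```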

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1815051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815051] Re: Bionic netplan render invalid yaml duplicate anchor declaration for nameserver entries

2019-02-07 Thread Ryan Harper
** Also affects: cloud-init (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Bionic)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1815051

Title:
  Bionic netplan render invalid yaml duplicate anchor declaration for
  nameserver entries

Status in cloud-init:
  In Progress
Status in cloud-init package in Ubuntu:
  In Progress
Status in cloud-init source package in Xenial:
  New
Status in cloud-init source package in Bionic:
  New
Status in cloud-init source package in Cosmic:
  New

Bug description:
  The netplan configuration redeclares the nameservers anchor for every
  single section (vlans, bonds), and uses the same id for similar entries
  (id001).

  In this specific case, the network configuration in maas has a bond0
  with two vlans, bond0.3502 and bond0.3503, and an untagged bond1
  without vlans. The rendered 50-cloud-init.yaml looks like this:

  network:
  version: 2
  ethernets:
  ...
  bonds:
  ...
  bond1:
  ...
  nameservers: &id001 <- anchor declaration here
  addresses:
  - 255.255.255.1
  - 255.255.255.2
  - 255.255.255.3
  - 255.255.255.5
  search:
  - customer.domain
  - maas
  ...
  bondM:
  ...
  nameservers: *id001

 vlans:
  bond0.3502:
  ...
  nameservers: &id001 <- anchor redeclaration here
  addresses:
  - 255.255.255.1
  - 255.255.255.2
  - 255.255.255.3
  - 255.255.255.5
  search:
  - customer.domain
  - maas
  bond0.3503:
  ...
  nameservers: *id001

  As cloud-init renders an invalid YAML file, netplan apply
  produces the following error (due to the anchor redeclaration in the
  vlans section):

 Invalid YAML at /etc/netplan/50-cloud-init.yaml line 118 column 25:
  second occurence

  This render bug prevents us using the untagged bond and the bond with
  the vlans in the same configuration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1815051/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1813253] Re: Update the processing of assigned addresses when assigning addresses

2019-02-07 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/633406
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=1746d7e0e682eab1001bfc434b73efb1cdf5f0f4
Submitter: Zuul
Branch: master

commit 1746d7e0e682eab1001bfc434b73efb1cdf5f0f4
Author: liuchengqian90 
Date:   Sun Jan 27 20:47:50 2019 +0800

Update the processing of assigned addresses when assigning addresses

1.It is best not to use 'netaddr.IPSet.add',
  because _compact_single_network in 'IPSet.add' is quite time consuming

2.When the current address pool does not have enough addresses,
  all addresses are allocated from the current pool,
  and allocations are continued from the next address pool
  until all addresses are assigned.

Change-Id: I804a95fdaa3552c785e85ffab7b8ac849c634a87
Closes-Bug: #1813253


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1813253

Title:
  Update the processing of assigned addresses when assigning addresses

Status in neutron:
  Fix Released

Bug description:
  openstack allinone.


  16 cpus with 2.40GHz
  16G memery

  only one subnet in a network.

  1. Neutron-server takes a long time when creating a port with multiple
  addresses at once

  neutron port-create --fixed-ip subnet_id=474b9575-b569-48dd-
  a3f2-0b02b752c098 --fixed-ip subnet_id=474b9575-b569-48dd-
  a3f2-0b02b752c098 --fixed-ip subnet_id=474b9575-b569-48dd-
  a3f2-0b02b752c098 --fixed-ip subnet_id=474b9575-b569-48dd-
  a3f2-0b02b752c098 --fixed-ip subnet_id=474b9575-b569-48dd-
  a3f2-0b02b752c098 --fixed-ip subnet_id=474b9575-b569-48dd-
  a3f2-0b02b752c098 --fixed-ip subnet_id=474b9575-b569-48dd-
  a3f2-0b02b752c098 --fixed-ip subnet_id=474b9575-b569-48dd-
  a3f2-0b02b752c098 --fixed-ip subnet_id=474b9575-b569-48dd-
  a3f2-0b02b752c098 --fixed-ip subnet_id=474b9575-b569-48dd-
  a3f2-0b02b752c098 66105fab-7ea9-4efc-b9c8-1839f742bdd1

  When there are already 6300+ addresses, it takes 50s to allocate the
  address part every time the command is run.

  in _generate_ips:
  2s+ for list_allocations (not fixed in this fix)
  2s+ for ip_allocations.add(ipallocation.ip_address)
  10 times.

  https://github.com/drkjam/netaddr/issues/171
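
  The performance point can be sketched with a plain set of addresses
  (using the stdlib ipaddress module instead of netaddr, purely for
  illustration): inserting into a Python set is O(1) per address, whereas
  IPSet.add re-compacts its internal CIDR ranges on every call.

```python
import ipaddress

# Already-allocated addresses collected in a plain set: O(1) insertion
# and membership, with no per-add range compaction.
allocated = {ipaddress.ip_address('10.0.0.%d' % i) for i in range(1, 7)}

def first_free(pool_start, pool_end):
    """Scan a pool and return the first unallocated address, or None."""
    addr = ipaddress.ip_address(pool_start)
    end = ipaddress.ip_address(pool_end)
    while addr <= end:
        if addr not in allocated:
            return addr
        addr += 1
    return None

print(first_free('10.0.0.1', '10.0.0.20'))  # 10.0.0.7
```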

  - list_allocations
  I found it is slow because of logging in the process_rows method:
  https://github.com/sqlalchemy/sqlalchemy/blob/master/lib/sqlalchemy/engine/result.py
  After I turned off 'connection_debug' in neutron.conf, the time dropped from
  '2 seconds, 741903 microseconds' to '711636 microseconds'.

  2.When the current address pool does not have enough addresses,
all addresses are allocated from the current pool,
and allocations are continued from the next address pool
until all addresses are assigned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1813253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815142] [NEW] ovsdbapp.exceptions.TimeoutException in functional tests

2019-02-07 Thread Boden R
Public bug reported:

It appears that recently we've been getting some OVS timeouts in various
functional jobs [1].
One example is [2], which has an exception message containing:

ovsdbapp.exceptions.TimeoutException: Commands
[, , ] exceeded timeout 10 seconds


Based on the logstash query [1] it appears this occasionally impacts functional 
jobs for neutron and networking-ovn.


[1] 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%20%5C%22ovsdbapp.exceptions.TimeoutException%3A%20Commands%5C%22
[2] 
http://logs.openstack.org/83/633283/2/check/neutron-functional-python27/48c9c98/job-output.txt.gz#_2019-02-07_12_36_06_518870

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815142

Title:
  ovsdbapp.exceptions.TimeoutException in functional tests

Status in neutron:
  New

Bug description:
  It appears that recently we've been getting some OVS timeouts in various
  functional jobs [1].
  One example is [2], which has an exception message containing:

  ovsdbapp.exceptions.TimeoutException: Commands
  [, , ] exceeded timeout 10 seconds

  
  Based on the logstash query [1] it appears this occasionally impacts 
functional jobs for neutron and networking-ovn.


  
  [1] 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%20%5C%22ovsdbapp.exceptions.TimeoutException%3A%20Commands%5C%22
  [2] 
http://logs.openstack.org/83/633283/2/check/neutron-functional-python27/48c9c98/job-output.txt.gz#_2019-02-07_12_36_06_518870

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815153] [NEW] Requested host during cold migrate is ignored if server created before Rocky

2019-02-07 Thread Matt Riedemann
Public bug reported:

I stumbled across this during a failing functional test:

https://review.openstack.org/#/c/635668/2/nova/conductor/tasks/migrate.py@263

In Rocky, new RequestSpec objects have the is_bfv field set, but change
https://review.openstack.org/#/c/583715/ was added to 'heal' old
RequestSpecs when servers created before Rocky are migrated (cold
migrate, live migrate, unshelve and evacuate).

The problem is change https://review.openstack.org/#/c/610098/ made the
RequestSpec.save() operation stop persisting the requested_destination
field, which means when heal_reqspec_is_bfv saves the is_bfv change to
the RequestSpec, the requested_destination is lost and the user-
specified target host is not honored (this would impact all move APIs
that target a target host, so cold migrate, live migrate and evacuate).

The simple way to fix it is by not overwriting the set
requested_destination field during save (don't persist it in the
database, but don't reset it to None in the object in memory):

https://review.openstack.org/#/c/635668/2/nova/objects/request_spec.py@517

This could also be a problem for the 'network_metadata' field added in
Rocky:

https://review.openstack.org/#/c/564442/
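
The proposed fix pattern (persist a filtered copy while leaving the in-memory
field set) can be sketched as follows; the class and field handling here are
illustrative stand-ins, not nova's actual versioned-object code:

```python
# Fields to keep in memory but exclude from the persisted payload.
TRANSIENT_FIELDS = {'requested_destination', 'network_metadata'}

class Spec:
    def __init__(self, **fields):
        self.fields = fields
        self.persisted = {}

    def save(self):
        # Filter a copy for persistence; the in-memory value is untouched.
        self.persisted = {k: v for k, v in self.fields.items()
                          if k not in TRANSIENT_FIELDS}

spec = Spec(is_bfv=True, requested_destination='target-host')
spec.save()
print('requested_destination' in spec.persisted)  # False: not persisted
print(spec.fields['requested_destination'])       # target-host: still set
```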

** Affects: nova
 Importance: High
 Status: Triaged

** Affects: nova/rocky
 Importance: High
 Status: Triaged


** Tags: migration

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
   Status: New => Triaged

** Changed in: nova/rocky
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815153

Title:
  Requested host during cold migrate is ignored if server created before
  Rocky

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) rocky series:
  Triaged

Bug description:
  I stumbled across this during a failing functional test:

  https://review.openstack.org/#/c/635668/2/nova/conductor/tasks/migrate.py@263

  In Rocky, new RequestSpec objects have the is_bfv field set, but
  change https://review.openstack.org/#/c/583715/ was added to 'heal'
  old RequestSpecs when servers created before Rocky are migrated (cold
  migrate, live migrate, unshelve and evacuate).

  The problem is change https://review.openstack.org/#/c/610098/ made
  the RequestSpec.save() operation stop persisting the
  requested_destination field, which means when heal_reqspec_is_bfv
  saves the is_bfv change to the RequestSpec, the requested_destination
  is lost and the user-specified target host is not honored (this would
  impact all move APIs that target a target host, so cold migrate, live
  migrate and evacuate).

  The simple way to fix it is by not overwriting the set
  requested_destination field during save (don't persist it in the
  database, but don't reset it to None in the object in memory):

  https://review.openstack.org/#/c/635668/2/nova/objects/request_spec.py@517

  This could also be a problem for the 'network_metadata' field added in
  Rocky:

  https://review.openstack.org/#/c/564442/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1815153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1800417] Re: Network: concurrent issue for create network operation

2019-02-07 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1800417

Title:
  Network: concurrent issue for create network operation

Status in neutron:
  Expired

Bug description:
  High level description:
  When running rally test-cases in parallel against the network creation API it 
is possible to encounter errors while attempting to update a segmentation 
allocation record in the DB.  This results from querying the database to find a 
free entry and then updating the tuple in independent operations without any 
sort of mutual exclusion over multiple users.  Since the neutron API is 
implemented with multiple child processes it is possible that a collision will 
occur when two processes attempt to access the same DB tuple.
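
  The select-then-update race described above is commonly closed with an
  atomic conditional UPDATE; a hedged sqlite sketch (table and column names
  are illustrative, not neutron's schema):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE alloc (seg_id INTEGER PRIMARY KEY, allocated INTEGER)")
conn.executemany("INSERT INTO alloc VALUES (?, 0)", [(i,) for i in (100, 101)])

def claim(seg_id):
    # The WHERE clause makes the claim atomic: rowcount is 0 when another
    # worker already flipped the tuple, so the caller can retry elsewhere.
    cur = conn.execute(
        "UPDATE alloc SET allocated = 1 WHERE seg_id = ? AND allocated = 0",
        (seg_id,))
    return cur.rowcount == 1

print(claim(100))  # True: first claim wins
print(claim(100))  # False: tuple already taken
```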

  Version: latest devstack

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1800417/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815165] [NEW] Creating a snapshot in Consistency Group was failed with Traceback of TemplateDoesNotExist

2019-02-07 Thread Keigo Noha
Public bug reported:

Creating a snapshot in a Consistency Group failed with a traceback of
TemplateDoesNotExist.

The traceback is below.

~~~
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/django/core/handlers/exception.py", 
line 41, in inner
response = get_response(request)
  File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
249, in _legacy_get_response
response = self._get_response(request)
  File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
217, in _get_response
response = self.process_exception_by_middleware(e, request)
  File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
215, in _get_response
response = response.render()
..
  File "/usr/lib/python2.7/site-packages/django/template/loader_tags.py", line 
204, in render
template = context.template.engine.get_template(template_name)
  File "/usr/lib/python2.7/site-packages/django/template/engine.py", line 162, 
in get_template
template, origin = self.find_template(template_name)
  File "/usr/lib/python2.7/site-packages/django/template/engine.py", line 148, 
in find_template
raise TemplateDoesNotExist(name, tried=tried)
TemplateDoesNotExist: project/volumes/cgroups/_snapshot_limits.html
~~~

The Consistency Group panel was moved out of the Volumes panel in commit
f85e0ffa9135a60ff203651ce7bfd5c0ca36ceec, but some templates weren't
updated at that time.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1815165

Title:
  Creating a snapshot in Consistency Group was failed with Traceback of
  TemplateDoesNotExist

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Creating a snapshot in a Consistency Group failed with a traceback of
  TemplateDoesNotExist.

  The traceback is below.

  ~~~
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/django/core/handlers/exception.py", 
line 41, in inner
  response = get_response(request)
File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
249, in _legacy_get_response
  response = self._get_response(request)
File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
217, in _get_response
  response = self.process_exception_by_middleware(e, request)
File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
215, in _get_response
  response = response.render()
  ..
File "/usr/lib/python2.7/site-packages/django/template/loader_tags.py", 
line 204, in render
  template = context.template.engine.get_template(template_name)
File "/usr/lib/python2.7/site-packages/django/template/engine.py", line 
162, in get_template
  template, origin = self.find_template(template_name)
File "/usr/lib/python2.7/site-packages/django/template/engine.py", line 
148, in find_template
  raise TemplateDoesNotExist(name, tried=tried)
  TemplateDoesNotExist: project/volumes/cgroups/_snapshot_limits.html
  ~~~

  The Consistency Group panel was moved out of the Volumes panel in commit
  f85e0ffa9135a60ff203651ce7bfd5c0ca36ceec, but some templates weren't
  updated at that time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1815165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815167] [NEW] Manage Compute services in nova manual typos

2019-02-07 Thread hyunsik Yang
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [ ] This doc is inaccurate in this way: __
- [ ] This is a doc addition request.
- [x] I have a fix to the document that I can paste below including example: 
input and output. 

 Current documentation (reason unquoted, so "log" is parsed as a separate argument):
 openstack compute service set --disable --disable-reason trial log nova nova-compute

 Proposed fix (reason quoted):
 openstack compute service set --disable --disable-reason "trial log" nova nova-compute

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 16.1.8.dev9 on 2019-02-05 18:29
SHA: 415c94cdf8cc5a5288bdf00fad0fed2ee79f411c
Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/services.rst
URL: https://docs.openstack.org/nova/pike/admin/services.html
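
The effect of the missing quotes can be illustrated with shlex (a sketch of
the shell's word splitting, not of the openstack client itself):

```python
import shlex

bad = shlex.split('openstack compute service set --disable '
                  '--disable-reason trial log nova nova-compute')
good = shlex.split('openstack compute service set --disable '
                   '--disable-reason "trial log" nova nova-compute')

# Without quotes the reason splits in two, leaving "log" as a stray
# positional argument for the CLI to misparse.
print(len(bad), len(good))  # 10 9
print(good[6])              # trial log
```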

** Affects: nova
 Importance: Undecided
 Assignee: hyunsik Yang (yangun)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => hyunsik Yang (yangun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815167

Title:
  Manage Compute services in nova manual typos

Status in OpenStack Compute (nova):
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [x] I have a fix to the document that I can paste below including example: 
input and output. 

   Current documentation (reason unquoted, so "log" is parsed as a separate argument):
   openstack compute service set --disable --disable-reason trial log nova nova-compute

   Proposed fix (reason quoted):
   openstack compute service set --disable --disable-reason "trial log" nova nova-compute

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 16.1.8.dev9 on 2019-02-05 18:29
  SHA: 415c94cdf8cc5a5288bdf00fad0fed2ee79f411c
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/services.rst
  URL: https://docs.openstack.org/nova/pike/admin/services.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1815167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815166] [NEW] Manage Compute services in nova manual typos

2019-02-07 Thread hyunsik Yang
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [ ] This doc is inaccurate in this way: __
- [ ] This is a doc addition request.
- [x] I have a fix to the document that I can paste below including example: 
input and output. 

 Current documentation (reason unquoted, so "log" is parsed as a separate argument):
 openstack compute service set --disable --disable-reason trial log nova nova-compute

 Proposed fix (reason quoted):
 openstack compute service set --disable --disable-reason "trial log" nova nova-compute

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 16.1.8.dev9 on 2019-02-05 18:29
SHA: 415c94cdf8cc5a5288bdf00fad0fed2ee79f411c
Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/services.rst
URL: https://docs.openstack.org/nova/pike/admin/services.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815166

Title:
  Manage Compute services in nova manual typos

Status in OpenStack Compute (nova):
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [x] I have a fix to the document that I can paste below including example: 
input and output. 

   Current documentation (reason unquoted, so "log" is parsed as a separate argument):
   openstack compute service set --disable --disable-reason trial log nova nova-compute

   Proposed fix (reason quoted):
   openstack compute service set --disable --disable-reason "trial log" nova nova-compute

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 16.1.8.dev9 on 2019-02-05 18:29
  SHA: 415c94cdf8cc5a5288bdf00fad0fed2ee79f411c
  Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/services.rst
  URL: https://docs.openstack.org/nova/pike/admin/services.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1815166/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp