[Yahoo-eng-team] [Bug 1819702] Re: router_id None is invalid in port forwarding

2019-03-14 Thread LIU Yulong
*** This bug is a duplicate of bug 1799135 ***
https://bugs.launchpad.net/bugs/1799135

** This bug has been marked a duplicate of bug 1799135
   [l3][port_forwarding] update floating IP (has binding port_forwarding) with 
empty {} input will lose router_id

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1819702

Title:
  router_id None is invalid in port forwarding

Status in neutron:
  Incomplete

Bug description:
  We use port forwarding in Rocky (and it's very nice!), but we ran into
  an issue: one of our users does something with Heat, and after that we
  can't list the port forwardings on a floating IP anymore.

  The only way out is to create a new forwarding, which adds the
  router_id again, and all is well then.

  
  The server reports a "ValueError: Field `router_id' cannot be None".

  It looks (from my point of view) like an inconsistency between the
  plugin code that actually sets router_id to None (on removal of the
  last port forwarding?) and the object that doesn't allow router_id to
  be None (or the removal of the last forwarding failed somehow).
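The reporter's theory can be illustrated with a minimal, self-contained sketch (hypothetical class names, not the actual neutron versioned-objects code): a field that refuses None reproduces the exact ValueError once the stored row has lost its router_id.

```python
# Minimal sketch (hypothetical names) of the mismatch described above:
# a non-nullable field rejects the None that the database row now holds.
class NonNullableField:
    def __set_name__(self, owner, name):
        self.name = name

    def __set__(self, obj, value):
        if value is None:
            raise ValueError("Field `%s' cannot be None" % self.name)
        obj.__dict__[self.name] = value


class PortForwarding:
    router_id = NonNullableField()


pf = PortForwarding()
pf.router_id = "some-router-uuid"   # normal case: fine
try:
    pf.router_id = None             # what listing hits once router_id is lost
except ValueError as exc:
    print(exc)                      # -> Field `router_id' cannot be None
```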

  some details:

  
  floating ip show reports:
  +---------------------+--------------------------------------+
  | Field               | Value                                |
  +---------------------+--------------------------------------+
  | created_at          | 2019-01-10T16:00:34Z                 |
  | description         |                                      |
  | dns_domain          | None                                 |
  | dns_name            | None                                 |
  | fixed_ip_address    | None                                 |
  | floating_ip_address | 1.2.3.4                              |
  | floating_network_id | fc06776a-02df-4962-8513-aea8b8177fd2 |
  | id                  | 6e61da15-d150-4c53-8a3f-914c36380459 |
  | name                | 1.2.3.4                              |
  | port_details        | None                                 |
  | port_id             | None                                 |
  | project_id          | 676a889223804b8fb4ddf55319530e91     |
  | qos_policy_id       | None                                 |
  | revision_number     | 304                                  |
  | router_id           | None                                 |
  | status              | ACTIVE                               |
  | subnet_id           | None                                 |
  | tags                | []                                   |
  | updated_at          | 2019-03-07T13:22:36Z                 |
  +---------------------+--------------------------------------+


  (partial) JSON response for that floating IP (from openstack -vv
  output):

   {"router_id": null, "status": "ACTIVE", "description": "", "tags":
  [], "port_id": null, "created_at": "2019-01-10T16:00:34Z",
  "updated_at": "2019-03-07T13:22:36Z", "floating_network_id":
  "fc06776a-02df-4962-8513-aea8b8177fd2", "port_details": null,
  "fixed_ip_address": null, "floating_ip_address": "1.2.3.4",
  "revision_number": 304, "tenant_id":
  "676a889223804b8fb4ddf55319530e91", "project_id":
  "676a889223804b8fb4ddf55319530e91", "port_forwardings": [{"protocol":
  "tcp", "internal_ip_address": "10.10.2.4", "internal_port": 22,
  "external_port": 50023}], "id": "6e61da15-d150-4c53-8a3f-
  914c36380459"}


  stack trace below

  
  2019-03-12 14:58:03.357 4108 ERROR neutron.api.v2.resource 
[req-de8e42f9-1469-4ee6-a73a-1a06b172edb5 bb762ad156de46f6888bf2ae1001cade 
676a889223804b8f
  b4ddf55319530e91 - abc44e40a0df46c2a00bb5f6109f 
abc44e40a0df46c2a00bb5f6109f] index failed: No details.: ValueError: Field 
`router_id' cannot 
  be None
  2019-03-12 14:58:03.357 4108 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2019-03-12 14:58:03.357 4108 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 98, in 
resource
  2019-03-12 14:58:03.357 4108 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2019-03-12 14:58:03.357 4108 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron_lib/db/api.py", line 140, in wrapped
  2019-03-12 14:58:03.357 4108 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
  2019-03-12 14:58:03.357 4108 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2019-03-12 14:58:03.357 4108 ERROR neutron.api.v2.resource 
self.force_reraise()
  2019-03-12 14:58:03.357 4108 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2019-03-12 14:58:03.357 4108 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2019-03-12 14:58:03.357 4108 ERROR neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1819899] Re: Self Signed Cert Phone Home

2019-03-14 Thread William Grant
** Summary changed:

- Buy Tramadol Online to Overcome Psoriatic Arthritis
+ Self Signed Cert Phone Home

** Information type changed from Private to Public

** Project changed: null-and-void => cloud-init

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1819899

Title:
  Self Signed Cert Phone Home

Status in cloud-init:
  Invalid

Bug description:
  I use LXD quite heavily, and as part of the container creation process
  it is common to want to phone home after the container has been
  created. The phone-home server has a self-signed certificate, so we
  have to add the certificate through the appropriate keys. Can there be
  a key to not verify the SSL certificate on phone home?
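For reference, cloud-init's phone_home module is configured via cloud-config along these lines; the `verify_ssl` key shown is the *hypothetical* knob this report asks for and does not exist in cloud-init:

```yaml
phone_home:
  url: https://phone-home.example.internal/$INSTANCE_ID/
  post: all
  tries: 10
  # Hypothetical key requested by this bug -- NOT implemented:
  # verify_ssl: false
```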

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1819899/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1819899] [NEW] Self Signed Cert Phone Home

2019-03-14 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

I use LXD quite heavily, and as part of the container creation process
it is common to want to phone home after the container has been
created. The phone-home server has a self-signed certificate, so we
have to add the certificate through the appropriate keys. Can there be
a key to not verify the SSL certificate on phone home?

** Affects: cloud-init
 Importance: Undecided
 Status: Invalid

-- 
Self Signed Cert Phone Home
https://bugs.launchpad.net/bugs/1819899
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to cloud-init.



[Yahoo-eng-team] [Bug 1815051] Re: Bionic netplan render invalid yaml duplicate anchor declaration for nameserver entries

2019-03-14 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init -
18.5-45-g3554ffe8-0ubuntu1

---
cloud-init (18.5-45-g3554ffe8-0ubuntu1) disco; urgency=medium

  * New upstream snapshot.
- cloud-init-per: POSIX sh does not support string subst, use sed
  (LP: #1819222)

 -- Daniel Watkins   Fri, 08 Mar 2019 17:42:34
-0500

** Changed in: cloud-init (Ubuntu)
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1815051

Title:
  Bionic netplan render invalid yaml duplicate anchor declaration for
  nameserver entries

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Bionic:
  Confirmed
Status in cloud-init source package in Cosmic:
  Confirmed

Bug description:
  The netplan configuration redeclares the nameservers anchor for every
  single section (vlans, bonds), and uses the same id for similar
  entries (id001).

  In this specific case the network configuration in MAAS has a bond0
  with two vlans, bond0.3502 and bond0.3503, and an untagged bond1
  without vlans. The rendered 50-cloud-init.yaml looks like this:

  network:
  version: 2
  ethernets:
  ...
  bonds:
  ...
  bond1:
  ...
  nameservers:  <- anchor declaration here
  addresses:
  - 255.255.255.1
  - 255.255.255.2
  - 255.255.255.3
  - 255.255.255.5
  search:
  - customer.domain
  - maas
  ...
  bondM:
  ...
  nameservers: *id001

 vlans:
  bond0.3502:
  ...
  nameservers:  <- anchor redeclaration here
  addresses:
  - 255.255.255.1
  - 255.255.255.2
  - 255.255.255.3
  - 255.255.255.5
  search:
  - customer.domain
  - maas
  bond0.3503:
  ...
  nameservers: *id001

  As cloud-init renders an invalid YAML file, netplan apply produces
  the following error (due to the anchor redeclaration in the vlans
  section):

 Invalid YAML at /etc/netplan/50-cloud-init.yaml line 118 column 25:
  second occurence

  This render bug prevents us from using the untagged bond and the bond
  with the vlans in the same configuration.
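  A well-formed rendering would declare each anchor exactly once and
  only reference it afterwards, e.g. (sketch, not actual netplan output):

```yaml
bonds:
  bond1:
    nameservers: &id001          # anchor declared once
      addresses: [255.255.255.1, 255.255.255.2]
      search: [customer.domain, maas]
  bondM:
    nameservers: *id001          # alias, no redeclaration
vlans:
  bond0.3502:
    nameservers: *id001          # alias here too, instead of a second &id001
  bond0.3503:
    nameservers: *id001
```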

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1815051/+subscriptions



[Yahoo-eng-team] [Bug 1819944] Re: nova-grenade-live-migration job failing on ubuntu bionic due to missing ceph package

2019-03-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/643150
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=9b2a7f9e7c9c24ad5b698f78681a1de1593b4a53
Submitter: Zuul
Branch:master

commit 9b2a7f9e7c9c24ad5b698f78681a1de1593b4a53
Author: melanie witt 
Date:   Wed Mar 13 19:45:44 2019 +

Re-enable Ceph in live migration testing

Revert I05182d8fd0df5e8f3f9f4fb11feed074990cdb9f and
Add fix to enable proper OS detection.

Closes-Bug: #1819944

Co-Authored-By: Jens Harbott 

Change-Id: Iea6288fe6d341ee92f87a35e0b0a59fe564ab96c


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1819944

Title:
  nova-grenade-live-migration job failing on ubuntu bionic due to
  missing ceph package

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Ceph hammer packages are not present in bionic, so
  nova-grenade-live-migration fails to run on bionic:
  - https://review.openstack.org/#/c/639017/8

  http://logs.openstack.org/17/639017/8/check/nova-grenade-live-
  migration/735242b/ara-report/result/8816413c-7053-418f-886b-
  e238fdea1ec5/

  10:59 AM E: Failed to fetch http://download.ceph.com/debian-
  hammer/dists/bionic/main/binary-amd64/Packages  404  Not Found [IP:
  2607:5300:201:2000::3:58a1 80]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1819944/+subscriptions



[Yahoo-eng-team] [Bug 1817458] Re: duplicate allocation candidates with granular request

2019-03-14 Thread Matt Riedemann
https://storyboard.openstack.org/#!/project/openstack/placement

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1817458

Title:
  duplicate allocation candidates with granular request

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===

  When we request a shared resource granularly, we can get duplicate
  allocation candidates for the same resource provider.

  How to reproduce
  
   
  1. Set up
  1-1. Set up two compute nodes (cn1, cn2 with VCPU resources)
  1-2. Set up one shared storage (ss1 with DISK_GB resources) marked with 
"MISC_SHARES_VIA_AGGREGATE"
  1-3. Put all of them in one aggregate

  2. Request only DISK_GB resource with granular request
  -> you will get duplicate allocation request of DISK_GB resource on ss1
   
  (NOTE): non-granular requests don't provide such duplicate entry
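  The granular syntax in step 2 uses numbered query-parameter suffixes
  against the placement API. A minimal sketch of how such a request
  string is built (the endpoint path is the standard placement one; the
  DISK_GB amount is an illustrative value):

```python
from urllib.parse import urlencode

# Unsuffixed 'resources' may be satisfied across providers; a numbered
# suffix such as 'resources1' forms a granular group that must be
# satisfied by a single resource provider (ss1 in this reproduction).
params = {"resources1": "DISK_GB:10"}
query = urlencode(params)           # ':' is percent-encoded in the URL
url = "/allocation_candidates?" + query
print(url)  # -> /allocation_candidates?resources1=DISK_GB%3A10
```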

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1817458/+subscriptions



[Yahoo-eng-team] [Bug 1819910] Re: test_server_connectivity_live_migration intermittently fails with NoValidHost due to DestinationHypervisorTooOld

2019-03-14 Thread Matt Riedemann
Removing nova from this for tracking purposes. I can open a separate bug
for the side effect logging issue in nova.

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1819910

Title:
  test_server_connectivity_live_migration intermittently fails with
  NoValidHost due to DestinationHypervisorTooOld

Status in tempest:
  In Progress

Bug description:
  Seen here:

  http://logs.openstack.org/94/637594/1/gate/tempest-
  slow/90def65/controller/logs/screen-n-super-
  cond.txt.gz#_Mar_13_01_08_16_120854

  Mar 13 01:08:16.120854 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]: WARNING nova.scheduler.utils [None 
req-a9345308-4e20-4976-ac6d-a1c529d14b16 
tempest-TestNetworkAdvancedServerOps-840936673 
tempest-TestNetworkAdvancedServerOps-840936673] Failed to 
compute_task_migrate_server: No valid host was found. There are not enough 
hosts available.
  Mar 13 01:08:16.121077 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]: Traceback (most recent call last):
  Mar 13 01:08:16.121335 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]:   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
226, in inner
  Mar 13 01:08:16.121545 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]: return func(*args, **kwargs)
  Mar 13 01:08:16.121819 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]:   File "/opt/stack/nova/nova/scheduler/manager.py", line 
154, in select_destinations
  Mar 13 01:08:16.122027 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]: allocation_request_version, return_alternates)
  Mar 13 01:08:16.122232 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]:   File 
"/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 91, in 
select_destinations
  Mar 13 01:08:16.122518 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]: allocation_request_version, return_alternates)
  Mar 13 01:08:16.122719 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]:   File 
"/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 244, in _schedule
  Mar 13 01:08:16.122924 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]: claimed_instance_uuids)
  Mar 13 01:08:16.123128 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]:   File 
"/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 281, in 
_ensure_sufficient_hosts
  Mar 13 01:08:16.123322 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]: raise exception.NoValidHost(reason=reason)
  Mar 13 01:08:16.123623 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]: NoValidHost: No valid host was found. There are not 
enough hosts available.
  Mar 13 01:08:16.123829 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]: : NoValidHost_Remote: No valid host was found. There are 
not enough hosts available.
  Mar 13 01:08:16.124035 ubuntu-xenial-inap-mtl01-0003740112 
nova-conductor[25115]: WARNING nova.scheduler.utils [None 
req-a9345308-4e20-4976-ac6d-a1c529d14b16 
tempest-TestNetworkAdvancedServerOps-840936673 
tempest-TestNetworkAdvancedServerOps-840936673] [instance: 
c2fd36f9-0b41-439c-9700-17aa0abe13c2] Setting instance to ACTIVE state.: 
NoValidHost_Remote: No valid host was found. There are not enough hosts 
available.

  And it looks like that is actually due to this:

  http://logs.openstack.org/94/637594/1/gate/tempest-
  slow/90def65/controller/logs/screen-n-super-
  cond.txt.gz#_Mar_13_01_08_15_842358

  Mar 13 01:08:15.842358 ubuntu-xenial-inap-mtl01-0003740112 nova-
  conductor[25115]: WARNING nova.scheduler.client.report [None
  req-a9345308-4e20-4976-ac6d-a1c529d14b16 tempest-
  TestNetworkAdvancedServerOps-840936673 tempest-
  TestNetworkAdvancedServerOps-840936673] Failed to save allocation for
  c2fd36f9-0b41-439c-9700-17aa0abe13c2. Got HTTP 400: {"errors":
  [{"status": 400, "request_id": "req-2ba69e9f-63ac-
  4f71-9dd0-2d5b97fcfbe8", "detail": "The server could not comply with
  the request since it is either malformed or otherwise incorrect.\n\n
  JSON does not validate: {} does not have enough properties  Failed
  validating 'minProperties' in
  schema['properties']['allocations']['items']['properties']['resources']:
  {'additionalProperties': False,  'minProperties': 1,
  'patternProperties': {'^[0-9A-Z_]+$': {'minimum': 1,
  'type': 'integer'}},  'type': 'object'}  On
  instance['allocations'][0]['resources']: {}  ", "title": "Bad
  Request"}]}: DestinationHypervisorTooOld: The instance requires a
  newer hypervisor version than has been provided.

  The DestinationHypervisorTooOld error there is misleading; it looks
  like the real failure was an allocation claim in placement:

  http://logs.openstack.org/94/637594/1/gate/tempest-
  slow/90def65/controller/logs/screen-placement-
  api.txt.gz#_Mar_13_01_08_15_839694
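  The "JSON does not validate" failure can be reproduced with a minimal
  re-implementation of the check placement applies to each allocation's
  resources object (a sketch; the real validator is jsonschema with the
  schema quoted above):

```python
import re

def validate_resources(resources):
    """Sketch of the schema rules quoted in the 400 response:
    a non-empty mapping of ^[0-9A-Z_]+$ keys to integers >= 1."""
    if not isinstance(resources, dict) or len(resources) < 1:
        raise ValueError("{} does not have enough properties")
    for key, value in resources.items():
        if not re.match(r"^[0-9A-Z_]+$", key):
            raise ValueError("additional property %r not allowed" % key)
        if not isinstance(value, int) or value < 1:
            raise ValueError("value for %r must be an integer >= 1" % key)

validate_resources({"VCPU": 1, "MEMORY_MB": 512})   # passes
try:
    validate_resources({})   # the empty resources dict from the failed claim
except ValueError as exc:
    print(exc)               # -> {} does not have enough properties
```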

  Mar 13 

[Yahoo-eng-team] [Bug 1819568] Re: network_data.json doesn't contain information about floating IPs

2019-03-14 Thread Matt Riedemann
Yeah, this looks like the "public-ipv4" key in the EC2 metadata
response. Note that it returns only the first one, even if multiple
floating IPs are attached to the server.

This is not really a bug though, it is at least a blueprint or if we're
following the rules about API changes it would also need a spec (but
that might be heavy for this, so bring it up in the nova team meeting).

** Tags added: api metadata

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

** Summary changed:

- network_data.json doesn't contain information about floating IPs
+ RFE: network_data.json doesn't contain information about floating IPs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1819568

Title:
  RFE: network_data.json doesn't contain information about floating IPs

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  I don't seem to be able to get floating IP information from the
  OpenStack metadata network_data.json. I can get this via EC2 metadata,
  and it would be good if it were in the OpenStack metadata too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1819568/+subscriptions



[Yahoo-eng-team] [Bug 1820018] Re: instance breaks the affinity/anti-affinity of server group with force_hosts or force_nodes

2019-03-14 Thread Matt Riedemann
This is arguably working as designed, see the ML discussion:

http://lists.openstack.org/pipermail/openstack-
discuss/2019-March/003813.html

I have added this as a topic for discussion at the Train release PTG:

https://etherpad.openstack.org/p/nova-ptg-train

** Changed in: nova
   Status: New => Opinion

** Tags added: api scheduler server-groups

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1820018

Title:
  instance breaks the affinity/anti-affinity of server group with
  force_hosts or force_nodes

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  Description
  ===
  Boot a server with a server group (affinity/anti-affinity) and
  force_host. The second instance breaks the affinity/anti-affinity
  policy.

  
  Steps to reproduce
  ==
  We create a server group (with the anti-affinity policy), then begin
  to test as follows:
  1. Create the first instance with the server group
  2. Create the second instance with the server group and with
  force_host (the host of the first instance)

  Command:
  1. nova boot <name> --flavor <flavor> --image <image>
  --availability-zone <az> --security-groups <sg> --nic net-id=<net-id>
  --hint group=<group-id>
  2. nova boot <name> --flavor <flavor> --image <image>
  --availability-zone <az>:<host> --security-groups <sg> --nic
  net-id=<net-id> --hint group=<group-id>

  
  Expected result
  ===
  We expected the second instance to fail to boot.

  
  Actual result
  =
  The second instance boots successfully.

  
  Environment
  ===
  1. The OpenStack version is master; the environment was set up with
  the devstack tool.
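  The behaviour in this report matches how force_hosts has historically
  worked: a forced host bypasses the scheduler filters, including the
  one enforcing anti-affinity. A toy sketch (a hypothetical function,
  not the actual nova scheduler code):

```python
# Sketch of why force_hosts can break anti-affinity: a forced host
# short-circuits filtering, so the anti-affinity filter never runs.
def select_host(hosts, group_hosts, force_host=None):
    if force_host is not None:
        # Filters (including anti-affinity) are bypassed entirely.
        return force_host
    # Anti-affinity filter: drop hosts already used by the group.
    candidates = [h for h in hosts if h not in group_hosts]
    return candidates[0] if candidates else None

# Second boot forced onto the first instance's host succeeds:
assert select_host(["cn1", "cn2"], group_hosts={"cn1"}, force_host="cn1") == "cn1"
# Without force_hosts, the anti-affinity filter would pick the other host:
assert select_host(["cn1", "cn2"], group_hosts={"cn1"}) == "cn2"
```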

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1820018/+subscriptions



[Yahoo-eng-team] [Bug 1820125] [NEW] Libvirt driver ungracefully explodes if unsupported arch is found

2019-03-14 Thread Dan Smith
Public bug reported:

If a new libvirt exposes an arch name that nova does not support, we
fail to gracefully skip it during the instance capability gathering:

2019-03-14 19:11:31.709 6 ERROR nova.compute.manager 
[req-4e626631-fefc-4c58-a1cd-5207c9384a1b - - - - -] Error updating resources 
for node primary.: InvalidArchitectureName: Architecture name 'armv6l' is not 
recognised
2019-03-14 19:11:31.709 6 ERROR nova.compute.manager Traceback (most recent 
call last):
2019-03-14 19:11:31.709 6 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 7956, in _update_available_resource_for_node
2019-03-14 19:11:31.709 6 ERROR nova.compute.manager startup=startup)
2019-03-14 19:11:31.709 6 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
 line 727, in update_available_resource
2019-03-14 19:11:31.709 6 ERROR nova.compute.manager resources = 
self.driver.get_available_resource(nodename)
2019-03-14 19:11:31.709 6 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 7070, in get_available_resource
2019-03-14 19:11:31.709 6 ERROR nova.compute.manager 
data["supported_instances"] = self._get_instance_capabilities()
2019-03-14 19:11:31.709 6 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 5943, in _get_instance_capabilities
2019-03-14 19:11:31.709 6 ERROR nova.compute.manager 
fields.Architecture.canonicalize(g.arch),
2019-03-14 19:11:31.709 6 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/objects/fields.py", 
line 200, in canonicalize
2019-03-14 19:11:31.709 6 ERROR nova.compute.manager raise 
exception.InvalidArchitectureName(arch=name)
2019-03-14 19:11:31.709 6 ERROR nova.compute.manager InvalidArchitectureName: 
Architecture name 'armv6l' is not recognised
2019-03-14 19:11:31.709 6 ERROR nova.compute.manager

** Affects: nova
 Importance: Undecided
 Assignee: Dan Smith (danms)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1820125

Title:
  Libvirt driver ungracefully explodes if unsupported arch is found

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  If a new libvirt exposes an arch name that nova does not support, we
  fail to gracefully skip it during the instance capability gathering:

  2019-03-14 19:11:31.709 6 ERROR nova.compute.manager 
[req-4e626631-fefc-4c58-a1cd-5207c9384a1b - - - - -] Error updating resources 
for node primary.: InvalidArchitectureName: Architecture name 'armv6l' is not 
recognised
  2019-03-14 19:11:31.709 6 ERROR nova.compute.manager Traceback (most recent 
call last):
  2019-03-14 19:11:31.709 6 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 7956, in _update_available_resource_for_node
  2019-03-14 19:11:31.709 6 ERROR nova.compute.manager startup=startup)
  2019-03-14 19:11:31.709 6 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
 line 727, in update_available_resource
  2019-03-14 19:11:31.709 6 ERROR nova.compute.manager resources = 
self.driver.get_available_resource(nodename)
  2019-03-14 19:11:31.709 6 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 7070, in get_available_resource
  2019-03-14 19:11:31.709 6 ERROR nova.compute.manager 
data["supported_instances"] = self._get_instance_capabilities()
  2019-03-14 19:11:31.709 6 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/virt/libvirt/driver.py",
 line 5943, in _get_instance_capabilities
  2019-03-14 19:11:31.709 6 ERROR nova.compute.manager 
fields.Architecture.canonicalize(g.arch),
  2019-03-14 19:11:31.709 6 ERROR nova.compute.manager   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/objects/fields.py", 
line 200, in canonicalize
  2019-03-14 19:11:31.709 6 ERROR nova.compute.manager raise 
exception.InvalidArchitectureName(arch=name)
  2019-03-14 19:11:31.709 6 ERROR nova.compute.manager InvalidArchitectureName: 
Architecture name 'armv6l' is not recognised
  2019-03-14 19:11:31.709 6 ERROR nova.compute.manager
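  One graceful handling would be to skip any arch that canonicalize()
  rejects rather than aborting the whole resource update. A minimal
  sketch (hypothetical helper names and arch set, not the actual nova
  patch):

```python
# Hypothetical stand-in for fields.Architecture.canonicalize().
KNOWN_ARCHES = {"x86_64", "i686", "aarch64", "ppc64le", "s390x"}

def canonicalize(name):
    if name.lower() not in KNOWN_ARCHES:
        raise ValueError("Architecture name %r is not recognised" % name)
    return name.lower()

def get_instance_capabilities(guest_arches):
    caps = []
    for arch in guest_arches:
        try:
            caps.append(canonicalize(arch))
        except ValueError:
            # Skip unsupported arches (e.g. 'armv6l') instead of letting
            # the exception abort the whole resource update.
            continue
    return caps

print(get_instance_capabilities(["x86_64", "armv6l"]))  # -> ['x86_64']
```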

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1820125/+subscriptions



[Yahoo-eng-team] [Bug 1607345] Re: Collect all logs needed to debug curtin/cloud-init for each deployment

2019-03-14 Thread Chad Smith
** Changed in: cloud-init (Ubuntu)
   Status: Fix Released => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Fix Released => Confirmed

** Changed in: cloud-init
   Importance: Wishlist => Medium

** Changed in: cloud-init
 Assignee: Chad Smith (chad.smith) => (unassigned)

** Description changed:

- According to https://bugs.launchpad.net/maas/+bug/1604962/comments/12,
- these logs are needed to debug curtin/cloud-init issues but aren't
- collected automatically by MAAS:
+ Re-opening this bug as confirmed because the previous SRU content
+ released provided only 'cloud-init collect-logs', a command-line
+ tool which tars all cloud-init install logs and artifacts for triage.
+ 
+ However, those fixes did not provide any configuration options for MAAS
+ to request that those logs are automatically published to MAAS upon
+ error.
+ 
+ 
+ Cloud-init should provide cloud-config which allows consumers to specify an 
endpoint and oauth credentials to which cloud-init will automatically POST all 
compressed cloud-init log artifacts.
+ 
+ 
+ === Original Description ===
+ According to https://bugs.launchpad.net/maas/+bug/1604962/comments/12, these 
logs are needed to debug curtin/cloud-init issues but aren't collected 
automatically by MAAS:
  
  - /var/log/cloud-init*
  - /run/cloud-init*
  - /var/log/cloud
  - /tmp/install.log
  
  We need these to be automatically collected by MAAS so we can
  automatically collect them as artifacts in the case of failures in OIL.
  curtin/cloud-init issues can be race conditions that are difficult to
  reproduce manually, so we need to grab the logs required to debug the
  first time it happens.
  
- 
- http://pad.lv/1607345
- https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1607345
- 
  === Begin SRU Template ===
  [Impact]
  ubuntu-bug cloud-init now collects cloud-init-related information for a 
bug-report
  
  [Test Case]
  
  # Launch instance under test
  $ for release in xenial zesty;
-   do
- ref=$release-proposed;
- lxc-proposed-snapshot --proposed --publish $release $ref;
- lxc launch $ref $name;
- sleep 10;
- lxc exec $name ubuntu-bug cloud-init  # And follow the prompts to report 
a bogus bug
-   done
+   do
+ ref=$release-proposed;
+ lxc-proposed-snapshot --proposed --publish $release $ref;
+ lxc launch $ref $name;
+ sleep 10;
+ lxc exec $name ubuntu-bug cloud-init  # And follow the prompts to report 
a bogus bug
+   done
  
  [Regression Potential]
  Worst case scenario is the apport wrapper doesn't work and the developer has 
to file a bug manually instead.
  
  [Other Info]
  Upstream commit at
-   https://git.launchpad.net/cloud-init/commit/?id=ca2730e2ac86b05f7e6
+   https://git.launchpad.net/cloud-init/commit/?id=ca2730e2ac86b05f7e6
  
  === End SRU Template ===

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1607345

Title:
  Collect all logs needed to debug curtin/cloud-init for each deployment

Status in cloud-init:
  Confirmed
Status in MAAS:
  Incomplete
Status in cloud-init package in Ubuntu:
  Confirmed
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Zesty:
  Fix Released

Bug description:
  Re-opening this bug as confirmed because the previous SRU content
  released provided only 'cloud-init collect-logs', a command-line
  tool which tars all cloud-init install logs and artifacts for triage.

  However, those fixes did not provide any configuration options for
  MAAS to request that those logs are automatically published to MAAS
  upon error.

  
  Cloud-init should provide cloud-config which allows consumers to specify an 
endpoint and oauth credentials to which cloud-init will automatically POST all 
compressed cloud-init log artifacts.
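  cloud-init already ships a reporting framework whose webhook handler
  takes an endpoint plus OAuth credentials, so the requested behaviour
  could plausibly build on cloud-config along these lines (a sketch;
  the endpoint and credential values are placeholders, and automatic
  log-POST-on-error is the feature being requested, not existing
  behaviour):

```yaml
reporting:
  maas:
    type: webhook
    endpoint: http://mymaas.example.com/MAAS/metadata/status/node-id
    consumer_key: placeholder-consumer-key
    token_key: placeholder-token-key
    token_secret: placeholder-token-secret
```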

  
  === Original Description ===
  According to https://bugs.launchpad.net/maas/+bug/1604962/comments/12, these 
logs are needed to debug curtin/cloud-init issues but aren't collected 
automatically by MAAS:

  - /var/log/cloud-init*
  - /run/cloud-init*
  - /var/log/cloud
  - /tmp/install.log

  We need these to be automatically collected by MAAS so we can
  automatically collect them as artifacts in the case of failures in
  OIL.  curtin/cloud-init issues can be race conditions that are
  difficult to reproduce manually, so we need to grab the logs required
  to debug the first time it happens.

  === Begin SRU Template ===
  [Impact]
  ubuntu-bug cloud-init now collects cloud-init-related information for a 
bug-report

  [Test Case]

  # Launch instance under test
  $ for release in xenial zesty;
    do
  ref=$release-proposed;
  lxc-proposed-snapshot --proposed --publish $release $ref;
  lxc launch $ref $name;
  sleep 10;
  lxc exec $name ubuntu-bug cloud-init  # And follow the prompts to report 
a bogus bug
    done

  [Regression Potential]
  

[Yahoo-eng-team] [Bug 1607345] Re: Collect all logs needed to debug curtin/cloud-init for each deployment

2019-03-14 Thread Chad Smith
** Changed in: cloud-init
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1607345

Title:
  Collect all logs needed to debug curtin/cloud-init for each deployment

Status in cloud-init:
  Confirmed
Status in MAAS:
  Incomplete
Status in cloud-init package in Ubuntu:
  Confirmed
Status in cloud-init source package in Xenial:
  Fix Released
Status in cloud-init source package in Zesty:
  Fix Released

Bug description:
  According to https://bugs.launchpad.net/maas/+bug/1604962/comments/12,
  these logs are needed to debug curtin/cloud-init issues but aren't
  collected automatically by MAAS:

  - /var/log/cloud-init*
  - /run/cloud-init*
  - /var/log/cloud
  - /tmp/install.log

  We need these to be automatically collected by MAAS so we can
  automatically collect them as artifacts in the case of failures in
  OIL.  curtin/cloud-init issues can be race conditions that are
  difficult to reproduce manually, so we need to grab the logs required
  to debug the first time it happens.

  
  http://pad.lv/1607345
  https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1607345

  === Begin SRU Template ===
  [Impact]
  ubuntu-bug cloud-init now collects cloud-init-related information for a 
bug-report

  [Test Case]

  # Launch instance under test
  $ for release in xenial zesty;
    do
      ref=$release-proposed;
      name=test-$release;  # any instance name works here
      lxc-proposed-snapshot --proposed --publish $release $ref;
      lxc launch $ref $name;
      sleep 10;
      lxc exec $name ubuntu-bug cloud-init  # And follow the prompts to report a bogus bug
    done

  [Regression Potential]
  Worst case scenario is the apport wrapper doesn't work and the developer has 
to file a bug manually instead.

  [Other Info]
  Upstream commit at
https://git.launchpad.net/cloud-init/commit/?id=ca2730e2ac86b05f7e6

  === End SRU Template ===

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1607345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1819982] Re: Misuse of assertTrue/assertFalse

2019-03-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/643194
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=86b3993ceec0c64f4aed02a498fc747ce91eb682
Submitter: Zuul
Branch: master

commit 86b3993ceec0c64f4aed02a498fc747ce91eb682
Author: Takashi NATSUME 
Date:   Thu Mar 14 09:05:21 2019 +0900

Fix misuse of assertTrue/assertFalse

Change-Id: I247705feeb71e20ad5260b0ca1da08de7290ba6e
Closes-Bug: #1819982


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1819982

Title:
  Misuse of assertTrue/assertFalse

Status in neutron:
  Fix Released

Bug description:
  There are some misuses of assertTrue/assertFalse in the unit tests.
  They should be fixed.

  
https://github.com/openstack/neutron/blob/add5347f9dbdc5f5857a975731d65b51d890afdb/neutron/tests/unit/services/placement_report/test_plugin.py#L191

  self.assertTrue(mech_driver, mechanism_test.TestMechanismDriver)

  
https://github.com/openstack/neutron/blob/add5347f9dbdc5f5857a975731d65b51d890afdb/neutron/tests/unit/notifiers/test_nova.py#L170

  self.assertFalse(event, None)

  
https://github.com/openstack/neutron/blob/add5347f9dbdc5f5857a975731d65b51d890afdb/neutron/tests/unit/notifiers/test_nova.py#L236

  self.assertFalse(send_events.called, False)
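For reference, the two-argument forms quoted above pass vacuously: the second argument of assertTrue/assertFalse is the failure *message*, not an expected value. A minimal illustration (plain unittest, not the neutron test base class):

```python
import unittest


class AssertUsageExamples(unittest.TestCase):
    def test_misuse_passes_vacuously(self):
        # assertTrue(a, b) treats b as the failure message, so this
        # passes for ANY truthy first argument -- the "comparison"
        # against the second argument never happens.
        self.assertTrue("mech_driver", "this string is just the msg")

    def test_intended_checks(self):
        event = None
        # What the misused calls presumably meant to assert:
        self.assertIsNone(event)
        self.assertIsInstance({}, dict)
```

Replacing the misused calls with assertIsNone/assertIsInstance/assertEqual makes the intended comparison actually happen.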

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1819982/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1819299] Re: Keystone Installation Tutorial for Red Hat Enterprise Linux and CentOS in keystone

2019-03-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/642972
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=fd5da18bf61a1ac9653f8f6afc8ddbb91d1f34f1
Submitter: Zuul
Branch: master

commit fd5da18bf61a1ac9653f8f6afc8ddbb91d1f34f1
Author: chenxing 
Date:   Wed Mar 13 16:35:42 2019 +0800

Fix the incorrect release name of project guide

backport: rocky

Change-Id: I592ca4962794dc34cb549c65f911c795cdf7f749
Closes-Bug: #1819299


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1819299

Title:
  Keystone Installation Tutorial for Red Hat Enterprise Linux and CentOS
  in keystone

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: "This guide documents the
OpenStack Queens (should be Rocky) release"
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release:  on 2019-01-07 15:31
  SHA: 718f4a9c4c55f5766895eff94eda66d420451235
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/index-rdo.rst
  URL: https://docs.openstack.org/keystone/rocky/install/index-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1819299/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1806770] Re: DHCP Agent should not release DHCP lease when client ID is not set on port

2019-03-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/623066
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=f2111e035424bf714099966ad724e9a4bd604c18
Submitter: Zuul
Branch: master

commit f2111e035424bf714099966ad724e9a4bd604c18
Author: Arjun Baindur 
Date:   Wed Dec 5 12:43:05 2018 -0800

Do not release DHCP lease when no client ID is set on port

The DHCP agent has a really strict enforcement of client ID, which
is part of the DHCP extra options. If a VM advertises a client ID,
DHCP agent will automatically release its lease whenever *any* other
port is updated/deleted, even if no client ID is set on the port,
because it thinks the client ID has changed.

When reload_allocations() is called, the DHCP agent parses the leases
and hosts files, and gets the list of all the ports in the network from the
DB, computing 3 different sets. The set from the leases file (v4_leases)
could have a client ID, but the set from the port DB and hosts file will
have None.

As a result, the set subtraction does not filter out the entry,
and all ports that have an active lease with a client ID are released.

The Client ID should only be enforced and leases released
if it's actually set in the port DB's DHCP extra Opts.
In that case it means someone knows what they are doing,
and we want to check for a mismatch. If the client ID on a port is
empty, it should not be treated like an unused lease.

We can't expect end users that just create VMs with auto created ports
to know/care about DHCP client IDs, then manually update ports or
change app templates.

In some cases, like Windows VMs, the client ID is advertised as the MAC by 
default.
In fact, there is a Windows bug which prevents you from even turning this 
off:

https://support.microsoft.com/en-us/help/3004537/dhcp-client-always-includes-option-61-in-the-dhcp-request-in-windows-8

Linux VMs don't have this on by default, but it may be enabled
in some templates unknown to users.

Change-Id: I8021f740bd78e654915337bd3287b45b2c422e95
Closes-Bug: #1806770


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1806770

Title:
  DHCP Agent should not release DHCP lease when client ID is not set on
  port

Status in neutron:
  Fix Released

Bug description:
  DHCP agent has a really strict enforcement of client ID, which is part
  of the DHCP extra options. If a VM advertises a client ID, DHCP agent
  will automatically release its lease whenever *any* other port is
  updated/deleted. This happens even if no client ID is set on the port.

  When reload_allocations() is called, DHCP agent parses the current
  leases file, the hosts file, and gets the list of all the ports in the
  network from the DB, computing 3 different sets. The set from the
  leases file (v4_leases) will have some client ID. The set from the
  port DB will have None. As a result, the set subtraction does not filter out the entry,
  and the port's DHCP lease is constantly released, whenever the VM
  renews its lease and any other port in the network is deleted:

  
https://github.com/openstack/neutron/blob/stable/pike/neutron/agent/linux/dhcp.py#L850

  v4_leases = set()
  for (k, v) in cur_leases.items():
      # IPv4 leases have a MAC, IPv6 ones do not, so we must ignore
      if netaddr.IPAddress(k).version == constants.IP_VERSION_4:
          # treat '*' as None, see note in _read_leases_file_leases()
          client_id = v['client_id']
          if client_id is '*':
              client_id = None
          v4_leases.add((k, v['iaid'], client_id))

  new_leases = set()
  for port in self.network.ports:
      client_id = self._get_client_id(port)
      for alloc in port.fixed_ips:
          new_leases.add((alloc.ip_address, port.mac_address, client_id))

  # If an entry is in the leases or host file(s), but doesn't have
  # a fixed IP on a corresponding neutron port, consider it stale.
  entries_to_release = (v4_leases | old_leases) - new_leases
  if not entries_to_release:
      return

  It was observed in one example of a released lease, its entries looked
  like:

  new_leases (from port DB)
  (u'10.81.96.186', u'fa:16:3e:eb:a1:13', None)
  old_leases (from hosts file)
  ('10.81.96.186', 'fa:16:3e:eb:a1:13', None)
  v4_leases (from leases file - updated by dnsmasq when VM requests)
  ('10.81.96.186', 'fa:16:3e:eb:a1:13', '01:fa:16:3e:eb:a1:13')

  Therefore the entries_to_release did not have that IP, MAC filtered
  out. The client_id in v4_leases entry was coming from a Windows VM,
  which faces a bug that prevents it from disabling client ID.
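  The set arithmetic above can be reproduced in isolation. The toy model below (not the actual dhcp.py code) uses the sample entries from this report and shows how normalizing the lease's client ID to None when the port DB has none configured, which is the gist of the fix, empties the stale-entry set:

```python
def leases_to_release(v4_leases, old_leases, new_leases,
                      ignore_unset_client_id=True):
    """Toy model of the reload_allocations() stale-entry computation."""
    if ignore_unset_client_id:
        # Ports with no client ID configured in the DB:
        known = {(ip, mac) for ip, mac, cid in new_leases if cid is None}
        # For those ports, drop any client-advertised ID from the lease
        # entry too, so the lease no longer looks stale.
        v4_leases = {
            (ip, mac, None) if (ip, mac) in known else (ip, mac, cid)
            for ip, mac, cid in v4_leases
        }
    return (v4_leases | old_leases) - new_leases


# Sample values from this bug report:
new_leases = {("10.81.96.186", "fa:16:3e:eb:a1:13", None)}
old_leases = {("10.81.96.186", "fa:16:3e:eb:a1:13", None)}
v4_leases = {("10.81.96.186", "fa:16:3e:eb:a1:13", "01:fa:16:3e:eb:a1:13")}
```

  With `ignore_unset_client_id=False` the v4_leases entry survives the subtraction and the active lease is wrongly released; with the normalization nothing is released.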

[Yahoo-eng-team] [Bug 1818614] Re: Various L3HA functional tests fails often

2019-03-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/642295
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=8fec1ffc833eba9b3fc5f812bf881f44b4beba0c
Submitter: Zuul
Branch: master

commit 8fec1ffc833eba9b3fc5f812bf881f44b4beba0c
Author: Slawek Kaplonski 
Date:   Sun Mar 10 22:45:15 2019 +0100

Set initial ha router state in neutron-keepalived-state-change

Sometimes, in the case of HA routers, it may happen that
keepalived sets the status of a router to MASTER before the
neutron-keepalived-state-change daemon spawns "ip monitor"
to monitor changes of IPs in the router's namespace.

In such a case the neutron-keepalived-state-change process will never
notice that keepalived set the router to MASTER, so the L3 agent will
not be notified and the router will not be configured properly.

To avoid such race condition neutron-keepalived-state-change will
now check if VIP address is already configured on ha interface
before it will spawn "ip monitor". If it is already configured
by keepalived, it will notify L3 agent that router is set to
MASTER.

Change-Id: Ie3fe825d65408fc969c478767b411fe0156e9fbc
Closes-Bug: #1818614
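The pre-check described above boils down to: before spawning "ip monitor", look whether the VIP address is already present on the HA interface. A simplified illustration, parsing sample `ip addr`-style output rather than using the real agent code (the interface name and addresses below are made up):

```python
def vip_already_configured(ip_addr_output, vip_cidr):
    """Return True if the keepalived VIP already appears on the interface.

    ip_addr_output: text as produced by `ip addr show <ha_interface>`.
    """
    return any(
        line.strip().startswith("inet ") and vip_cidr in line
        for line in ip_addr_output.splitlines()
    )


SAMPLE = """\
2: ha-1234: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450
    inet 169.254.192.5/18 brd 169.254.255.255 scope global ha-1234
    inet 169.254.0.1/24 scope global ha-1234
"""
```

If the check returns True, the daemon can immediately notify the L3 agent that the router is MASTER instead of waiting for a transition it already missed.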


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818614

Title:
  Various L3HA functional tests fails often

Status in neutron:
  Fix Released

Bug description:
  Recently many L3 HA related functional tests are failing.
  The common thing in all those errors is the fact that they fail while
waiting for the L3 HA router to become master.

  Example stack trace:

  ft2.12: neutron.tests.functional.agent.l3.test_ha_router.LinuxBridgeL3HATestCase.test_ha_router_lifecycle_StringException:
  Traceback (most recent call last):
    File "neutron/tests/base.py", line 174, in func
      return f(self, *args, **kwargs)
    File "neutron/tests/base.py", line 174, in func
      return f(self, *args, **kwargs)
    File "neutron/tests/functional/agent/l3/test_ha_router.py", line 81, in test_ha_router_lifecycle
      self._router_lifecycle(enable_ha=True, router_info=router_info)
    File "neutron/tests/functional/agent/l3/framework.py", line 274, in _router_lifecycle
      common_utils.wait_until_true(lambda: router.ha_state == 'master')
    File "neutron/common/utils.py", line 690, in wait_until_true
      raise WaitTimeout(_("Timed out after %d seconds") % timeout)
  neutron.common.utils.WaitTimeout: Timed out after 60 seconds

  Example failure: http://logs.openstack.org/79/633979/21/check/neutron-functional-python27/ce7ef07/logs/testr_results.html.gz

  Logstash query:
  http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22ha_state%20%3D%3D%20'master')%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818614/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1820102] [NEW] Incorrect Error message when user change password to a previous password.

2019-03-14 Thread Vishal Manchanda
Public bug reported:

When a user changes their password to a previous password, the GUI shows
the generic error message "Unable to change password". It should instead
raise a specific message such as "The new password cannot be identical to
a previous password".

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1820102

Title:
  Incorrect Error message when user change password to a previous
  password.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a user changes their password to a previous password, the GUI shows
  the generic error message "Unable to change password". It should instead
  raise a specific message such as "The new password cannot be identical to
  a previous password".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1820102/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818317] Re: Network filtering by MTU is not supported with the "net-mtu-writable" module

2019-03-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/642900
Committed: 
https://git.openstack.org/cgit/openstack/neutron-lib/commit/?id=31bad8c63742b92d900546df741b086a229a0db7
Submitter: Zuul
Branch: master

commit 31bad8c63742b92d900546df741b086a229a0db7
Author: Hongbin Lu 
Date:   Tue Mar 12 21:44:51 2019 +

Allow filtering/sorting by the 'mtu' field

Change-Id: I4096a9884aec25758438c2f7bd8df212eec20796
Closes-Bug: #1818317


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818317

Title:
  Network filtering by MTU is not supported with the "net-mtu-writable"
  module

Status in neutron:
  Fix Released

Bug description:
  Read-only MTU
  (https://github.com/openstack/neutron-lib/blob/fc2a81058bfd3ba9fd3501660156c71ff1c8129c/neutron_lib/api/definitions/network_mtu.py#L51..L52)
  supports listing, however the writable "net-mtu-writable" extension
  (https://github.com/openstack/neutron-lib/blob/fc2a81058bfd3ba9fd3501660156c71ff1c8129c/neutron_lib/api/definitions/network_mtu_writable.py#L54..L56)
  doesn't support filtering by MTU:

  Bad request with: [GET http://192.168.200.182:9696/v2.0/networks?mtu=1450&name=TESTACC-ZtUqAVkR]:
  {"NeutronError": {"message": "[u'mtu'] is invalid attribute for filtering", "type": "HTTPBadRequest", "detail": ""}}
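  With the change merged, a filtered list is just an extra query parameter on the networks collection. A small sketch of composing such a URL client-side (the base URL is the one from this report; the helper itself is illustrative, not part of any client library):

```python
import urllib.parse


def build_network_list_url(base_url, **filters):
    """Compose a /v2.0/networks URL carrying filter query parameters."""
    query = urllib.parse.urlencode(filters)
    return "%s/v2.0/networks?%s" % (base_url.rstrip("/"), query)
```

  Sending a GET to the resulting URL against a neutron-lib with the fix returns only networks whose MTU matches, instead of the HTTPBadRequest shown above.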

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818317/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607345] Re: Collect all logs needed to debug curtin/cloud-init for each deployment

2019-03-14 Thread Andres Rodriguez
** Changed in: maas
Milestone: 2.5.x => 2.6.0

** No longer affects: maas/2.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1607345

Title:
  Collect all logs needed to debug curtin/cloud-init for each deployment

Status in cloud-init:
  Fix Released
Status in MAAS:
  Incomplete
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released
Status in cloud-init source package in Zesty:
  Fix Released

Bug description:
  According to https://bugs.launchpad.net/maas/+bug/1604962/comments/12,
  these logs are needed to debug curtin/cloud-init issues but aren't
  collected automatically by MAAS:

  - /var/log/cloud-init*
  - /run/cloud-init*
  - /var/log/cloud
  - /tmp/install.log

  We need these to be automatically collected by MAAS so we can
  automatically collect them as artifacts in the case of failures in
  OIL.  curtin/cloud-init issues can be race conditions that are
  difficult to reproduce manually, so we need to grab the logs required
  to debug the first time it happens.

  
  http://pad.lv/1607345
  https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1607345

  === Begin SRU Template ===
  [Impact]
  ubuntu-bug cloud-init now collects cloud-init-related information for a 
bug-report

  [Test Case]

  # Launch instance under test
  $ for release in xenial zesty;
    do
      ref=$release-proposed;
      name=test-$release;  # any instance name works here
      lxc-proposed-snapshot --proposed --publish $release $ref;
      lxc launch $ref $name;
      sleep 10;
      lxc exec $name ubuntu-bug cloud-init  # And follow the prompts to report a bogus bug
    done

  [Regression Potential]
  Worst case scenario is the apport wrapper doesn't work and the developer has 
to file a bug manually instead.

  [Other Info]
  Upstream commit at
https://git.launchpad.net/cloud-init/commit/?id=ca2730e2ac86b05f7e6

  === End SRU Template ===

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1607345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1819740] Re: "test_port_ip_update_revises" fails in py37 intermittently

2019-03-14 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/642869
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=44382ac446d32e6300a732f968be0fbc843630e2
Submitter: Zuul
Branch: master

commit 44382ac446d32e6300a732f968be0fbc843630e2
Author: Rodolfo Alonso Hernandez 
Date:   Tue Mar 12 19:27:42 2019 +

Specify tenant_id in TestRevisionPlugin objects

In order to avoid interferences between other tests, the objects
created in TestRevisionPlugin will be created for random
tenant IDs, generated during the execution of each test.

Change-Id: Ica7fe2379c7b1ce516ae7b0cd3959cff88a0b895
Closes-Bug: #1819740


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1819740

Title:
  "test_port_ip_update_revises" fails in py37 intermittently

Status in neutron:
  Fix Released

Bug description:
  Error log [1]:
  Traceback (most recent call last):
    File "/home/zuul/src/git.openstack.org/openstack/neutron/neutron/tests/base.py", line 174, in func
      return f(self, *args, **kwargs)
    File "/home/zuul/src/git.openstack.org/openstack/neutron/neutron/tests/unit/services/revisions/test_revision_plugin.py", line 170, in test_port_ip_update_revises
      response = self._update('ports', port['port']['id'], new)
    File "/home/zuul/src/git.openstack.org/openstack/neutron/neutron/tests/unit/db/test_db_base_plugin_v2.py", line 603, in _update
      self.assertEqual(expected_code, res.status_int)
    File "/home/zuul/src/git.openstack.org/openstack/neutron/.tox/py37/lib/python3.7/site-packages/testtools/testcase.py", line 411, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/home/zuul/src/git.openstack.org/openstack/neutron/.tox/py37/lib/python3.7/site-packages/testtools/testcase.py", line 498, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: 200 != 400

  
  [1] http://logs.openstack.org/79/633979/26/check/openstack-tox-py37/e7878ff/testr_results.html.gz
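  The merged fix's approach, generating a fresh tenant ID per test so parallel tests cannot collide on shared objects, can be sketched as follows (the class name below is a toy stand-in, not the real TestRevisionPlugin):

```python
import uuid


def random_tenant_id():
    """Return a fresh, collision-resistant tenant ID for one test run."""
    return uuid.uuid4().hex


class FakeRevisionTest:
    """Toy stand-in: each test instance gets its own tenant namespace."""

    def setUp(self):
        # Objects created under this tenant cannot interfere with
        # objects created by any other concurrently running test.
        self.tenant_id = random_tenant_id()
```

  Because every setUp() draws a new UUID, two tests running in the same worker pool never share a tenant, which removes the cross-test interference behind the intermittent 400.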

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1819740/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1820039] [NEW] glance Windows support

2019-03-14 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/630705
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/glance" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 5759ec0b1cbb9153558d52f55b874ab8b45880bb
Author: Lucian Petrut 
Date:   Fri Jan 4 10:07:57 2019 +

glance Windows support

This change will allow glance services to run on Windows, using
eventlet wsgi for API services.

This change will:
* avoid monkey patching the os module on Windows (which causes Popen
  to fail)
* avoiding unavailable signals
* avoid renaming in-use files or leaking handles
* update the check that ensures that just one scrubber process may
  run at a time. We can't rely on process names as there might be
  wrapper processes that have similar names (no she-bangs on Windows,
  so the scripts are called a bit differently). We'll use a global
  named mutex instead.

A subsequent change will leverage Windows job objects as a
replacement for process groups, also avoiding forking when spawning
workers.

At the moment, some Glance tests cannot run on Windows, which is
also covered by subsequent patches.

DocImpact

blueprint windows-support

Change-Id: I3bca69638685ceb11a1a316511ad9a298c630ad5
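The "only one scrubber at a time" guard from the commit message relies on a Windows named mutex. A portable analogue of the same single-instance idea, using a POSIX advisory file lock, purely illustrative and not the glance implementation:

```python
import fcntl  # POSIX-only; the actual change uses a Windows named mutex
import os
import tempfile


class SingleInstanceLock:
    """Hold an exclusive, non-blocking advisory lock on a lock file."""

    def __init__(self, path):
        # Keep the file object alive: the lock lives as long as the fd.
        self._f = open(path, "w")

    def acquire(self):
        try:
            fcntl.flock(self._f, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True
        except OSError:
            return False
```

Like a named mutex, this works regardless of process names or wrapper processes, which is exactly why the commit abandons process-name matching on Windows.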

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: doc glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1820039

Title:
  glance Windows support

Status in Glance:
  New

Bug description:
  https://review.openstack.org/630705
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/glance" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 5759ec0b1cbb9153558d52f55b874ab8b45880bb
  Author: Lucian Petrut 
  Date:   Fri Jan 4 10:07:57 2019 +

  glance Windows support
  
  This change will allow glance services to run on Windows, using
  eventlet wsgi for API services.
  
  This change will:
  * avoid monkey patching the os module on Windows (which causes Popen
to fail)
  * avoiding unavailable signals
  * avoid renaming in-use files or leaking handles
  * update the check that ensures that just one scrubber process may
run at a time. We can't rely on process names as there might be
wrapper processes that have similar names (no she-bangs on Windows,
so the scripts are called a bit differently). We'll use a global
named mutex instead.
  
  A subsequent change will leverage Windows job objects as a
  replacement for process groups, also avoiding forking when spawning
  workers.
  
  At the moment, some Glance tests cannot run on Windows, which is
  also covered by subsequent patches.
  
  DocImpact
  
  blueprint windows-support
  
  Change-Id: I3bca69638685ceb11a1a316511ad9a298c630ad5

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1820039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1820018] [NEW] instance breaks the affinity/anti-affinity of server group with force_hosts or force_nodes

2019-03-14 Thread Boxiang Zhu
Public bug reported:

Description
===
Boot a server with a server group (affinity/anti-affinity) and force_host.
The second instance breaks the affinity/anti-affinity policy.


Steps to reproduce
==
We create a server group (with the anti-affinity policy). Then we begin to
test, as follows:
1. Create the first instance with the server-group
2. Create the second instance with the server-group and with force_host(the 
host of first instance)

Command:
1. nova boot <server-name> --flavor <flavor> --image <image> --availability-zone <az> --security-groups <sg> --nic net-id=<net-id> --hint group=<server-group-id>
2. nova boot <server-name> --flavor <flavor> --image <image> --availability-zone <az>:<host> --security-groups <sg> --nic net-id=<net-id> --hint group=<server-group-id>


Expected result
===
We would expect the second instance to fail to boot.


Actual result
=
The second instance boots successfully.


Environment
===
1. The OpenStack version is master; the environment was set up with the devstack tool.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1820018

Title:
  instance breaks the affinity/anti-affinity of server group with
  force_hosts or force_nodes

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Boot a server with a server group (affinity/anti-affinity) and force_host.
  The second instance breaks the affinity/anti-affinity policy.

  
  Steps to reproduce
  ==
  We create a server group (with the anti-affinity policy). Then we begin to
  test, as follows:
  1. Create the first instance with the server-group
  2. Create the second instance with the server-group and with force_host(the 
host of first instance)

  Command:
  1. nova boot <server-name> --flavor <flavor> --image <image> --availability-zone <az> --security-groups <sg> --nic net-id=<net-id> --hint group=<server-group-id>
  2. nova boot <server-name> --flavor <flavor> --image <image> --availability-zone <az>:<host> --security-groups <sg> --nic net-id=<net-id> --hint group=<server-group-id>

  
  Expected result
  ===
  We would expect the second instance to fail to boot.

  
  Actual result
  =
  The second instance boots successfully.

  
  Environment
  ===
  1. The OpenStack version is master; the environment was set up with the devstack tool.
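  The reported behavior follows from the scheduler's force_hosts path bypassing filters, including the server-group anti-affinity filter. A toy model of that control flow (not nova's actual scheduler code; names and structure are invented for illustration):

```python
def pick_hosts(hosts, group_members, policy, force_host=None):
    """Toy scheduler: filters are skipped when a host is forced."""
    if force_host is not None:
        # force_hosts path: scheduler filters (including the
        # ServerGroup anti-affinity check) are bypassed, which is
        # what lets the policy be violated.
        return [force_host]
    if policy == "anti-affinity":
        return [h for h in hosts if h not in group_members]
    if policy == "affinity":
        return [h for h in hosts if not group_members or h in group_members]
    return hosts
```

  In the normal path the host already holding a group member is excluded; with force_host that same host is returned unconditionally, reproducing the report.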

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1820018/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp