[Yahoo-eng-team] [Bug 1760696] Re: Problem with datasources in the cloud

2018-07-12 Thread Launchpad Bug Tracker
[Expired for cloud-init because there has been no activity for 60 days.]

** Changed in: cloud-init
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1760696

Title:
  Problem with datasources in the cloud

Status in cloud-init:
  Expired

Bug description:
  Got this message when logging in to a new VM:
  **************************************************************************
  # A new feature in cloud-init identified possible datasources for
  # this system as:
  #   ['Ec2', 'None']
  # However, the datasource used was: CloudStack
  #
  # In the future, cloud-init will only attempt to use datasources that
  # are identified or specifically configured.
  # For more information see
  #   https://bugs.launchpad.net/bugs/1669675
  #
  # If you are seeing this message, please file a bug against
  # cloud-init at
  #   https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid
  # Make sure to include the cloud provider your instance is
  # running on.
  #
  # After you have filed a bug, you can disable this warning by launching
  # your instance with the cloud-config below, or putting that content
  # into /etc/cloud/cloud.cfg.d/99-warnings.cfg
  #
  # #cloud-config
  # warnings:
  #   dsid_missing_source: off
  **************************************************************************
  I am using the InterNuvem cloud at the University of São Paulo.
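
  For reference, a minimal sketch of the workaround the warning itself
  suggests (assumptions: it is run as root on the affected instance, and the
  path and cloud-config content are copied verbatim from the message above):

    import pathlib
    import textwrap

    CLOUD_CFG = textwrap.dedent("""\
        #cloud-config
        warnings:
          dsid_missing_source: off
        """)

    # Drop the override fragment where cloud-init reads extra configuration.
    pathlib.Path("/etc/cloud/cloud.cfg.d/99-warnings.cfg").write_text(CLOUD_CFG)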

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1760696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1769227] Re: Openstack quota update doesn't work for --secgroups.

2018-07-12 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1769227

Title:
  Openstack quota update doesn't work for --secgroups.

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Openstack release: Mitaka
  I am trying to update the "secgroups" quota for a project. Below is what I did:

  1> $ openstack quota set --secgroups 60 ProjectX
     Then, when I ran $ openstack quota show ProjectX, the "secgroups" field was
     not updated; it still showed 10.

  2> Then I tried $ nova quota-update --security-groups 60 ProjectX, followed
     again by $ openstack quota show ProjectX. This also did not update the
     secgroups field.

  3> $ nova quota-show --tenant ProjectX -> However, this command printed
     "| security_groups | 60 |", which is correct. But "openstack quota show"
     does not give the right information.

  4> On the Horizon dashboard, the security group quota is still 10.

  There is a mismatch in the "secgroups" values reported by the different
  commands (openstack and nova), and the dashboard also does not reflect the
  correct value.
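
  The mismatch is easier to see by querying nova and neutron separately; a
  rough diagnostic sketch (assumptions: the auth values and project ID below
  are placeholders, and with neutron enabled the security-group quota that
  "openstack quota show" reports lives in neutron rather than nova):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from neutronclient.v2_0 import client as neutron_client
    from novaclient import client as nova_client

    auth = v3.Password(auth_url='http://controller:5000/v3',      # placeholder
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    nova = nova_client.Client('2', session=sess)
    neutron = neutron_client.Client(session=sess)

    project_id = 'PROJECTX_UUID'                                   # placeholder
    # nova's own quota record (what "nova quota-show" reads):
    print(nova.quotas.get(project_id).security_groups)
    # neutron's quota record for the same project:
    print(neutron.show_quota(project_id)['quota']['security_group'])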

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1769227/+subscriptions


[Yahoo-eng-team] [Bug 1525775] Re: When ovs-agent is restarted, flows created by other than ovs-agent are deleted.

2018-07-12 Thread YAMAMOTO Takashi
https://review.openstack.org/#/c/480031/

** Changed in: tap-as-a-service
   Importance: Undecided => Critical

** Changed in: tap-as-a-service
   Status: New => Fix Released

** Changed in: tap-as-a-service
 Assignee: (unassigned) => SUZUKI, Kazuhiro (kaz-kaz)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525775

Title:
  When ovs-agent is restarted, flows created by other than ovs-agent are
  deleted.

Status in neutron:
  Won't Fix
Status in tap-as-a-service:
  Fix Released

Bug description:
  When ovs-agent is restarted, the cleanup logic drops flow entries
  unless they are stamped by agent_uuid (recorded as a cookie).

  Reference:
  
https://git.openstack.org/cgit/openstack/neutron/commit/?id=73673beacd75a2d9f51f15b284f1b458d32e992e

  Not only old flows, but also flows created by something other than ovs-agent
  (flows without a stamp) are deleted.

  Version: Liberty
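
  To make the cleanup behaviour concrete, a toy sketch of cookie-stamped flow
  cleanup (names and values are made up; the real agent drives this through
  OVS, not Python lists):

    AGENT_COOKIE = 0x5ca1ab1e   # stamp written by the current agent run

    def flows_to_drop(flows, agent_cookie=AGENT_COOKIE):
        """Return the flows the restart cleanup would delete: everything not
        stamped with the running agent's cookie, which also catches flows
        installed by other components (cookie 0 or a foreign cookie)."""
        return [f for f in flows if f.get('cookie', 0) != agent_cookie]

    flows = [
        {'cookie': AGENT_COOKIE, 'match': 'in_port=1'},   # re-installed by the agent
        {'cookie': 0x0, 'match': 'dl_dst=aa:bb'},         # e.g. a tap-as-a-service flow
    ]
    print(flows_to_drop(flows))   # only the unstamped flow is dropped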

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1525775/+subscriptions


[Yahoo-eng-team] [Bug 1775310] Re: Unused namespace appears.

2018-07-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/579058
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=f5f682c63a043d59388731d78fae98481685019e
Submitter: Zuul
Branch:master

commit f5f682c63a043d59388731d78fae98481685019e
Author: Yasuhiro Kimura 
Date:   Fri Jun 29 14:19:02 2018 +0900

Modify logic of l3-agent to be notified

Although no DVR router is hosted on the controller node,
neutron-server notifies the l3-agent on the compute node.
As a result, the following bugs happen when a VM is migrated or resized:
 1) The l3-agent on the compute node outputs error logs
    when the VM is connected to a non-DVR, HA router.
 2) An unused namespace appears on the compute node
    when the VM is connected to a non-DVR, non-HA router.
So, modify the logic by which the l3-agent is notified
after migrating or resizing.

Change-Id: I53955b885f53d4ee15377a08627c2c25cb1b7041
Closes-Bug: #1775310


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775310

Title:
  Unused namespace appears.

Status in neutron:
  Fix Released

Bug description:
  Although no DVR router is hosted on the controller node, neutron-server
  notifies the l3-agent on the compute node. As a result, the following bugs
  happen when a VM is migrated or resized:
   1) The l3-agent on the compute node outputs error logs when the VM is
      connected to a non-DVR, HA router.
   2) An unused namespace appears on the compute node when the VM is
      connected to a non-DVR, non-HA router.

  This bug does not happen in the single compute node case, but it does
  happen in the multi compute node case.

  Environment Information:
  - OpenStack version is stable/queens and master.
  - Multiple compute nodes.
  - DVR and L3HA are enabled.
  I confirmed this bug with the following steps.

  ==
  1) The l3-agent on the compute node outputs error logs when creating a
  non-DVR, HA router.

  Create a non-DVR, HA router and connect the private network and the
  external gateway.
  stack@ctl-001:~/devstack$ openstack router create --centralized nodvr_ha_router
  stack@ctl-001:~/devstack$ openstack router set --external-gateway public nodvr_ha_router
  stack@ctl-001:~/devstack$ openstack router add subnet nodvr_ha_router private-subnet

  Create two VMs on the same compute node and the same private network.
  stack@ctl-001:~/devstack$ openstack server create --availability-zone nova:cmp-001 --flavor cirros256 --image cirros-0.3.5-x86_64-disk --network private vm1
  stack@ctl-001:~/devstack$ openstack server create --availability-zone nova:cmp-001 --flavor cirros256 --image cirros-0.3.5-x86_64-disk --network private vm2

  Add a floating IP to the VM.
  stack@ctl-001:~/devstack$ openstack floating ip create public
  stack@ctl-001:~/devstack$ openstack server add floating ip vm1 192.168.20.54

  Migrate the VM.
  stack@ctl-001:~/devstack$ openstack server migrate vm1
  stack@ctl-001:~/devstack$ openstack server resize --confirm vm1

  After migrating, the l3-agent on the compute node outputs the following
  error logs.
  --
  Jun 15 11:31:07 cmp-001 neutron-l3-agent[31636]: ERROR neutron.agent.l3.agent 
[-] Failed to process compatible router: 15bcec0d-07ad-4b64-8de9-2527e22744c7: 
Exception: Unable to process HA router 15bcec0d-07ad-4b64-8de9-2527e22744c7 
without HA port
  Jun 15 11:31:07 cmp-001 neutron-l3-agent[31636]: ERROR neutron.agent.l3.agent 
Traceback (most recent call last):
  Jun 15 11:31:07 cmp-001 neutron-l3-agent[31636]: ERROR neutron.agent.l3.agent 
File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 560, in 
_process_router_update
  Jun 15 11:31:07 cmp-001 neutron-l3-agent[31636]: ERROR neutron.agent.l3.agent 
self._process_router_if_compatible(router)
  Jun 15 11:31:07 cmp-001 neutron-l3-agent[31636]: ERROR neutron.agent.l3.agent 
File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 479, in 
_process_router_if_compatible
  Jun 15 11:31:07 cmp-001 neutron-l3-agent[31636]: ERROR neutron.agent.l3.agent 
self._process_added_router(router)
  Jun 15 11:31:07 cmp-001 neutron-l3-agent[31636]: ERROR neutron.agent.l3.agent 
File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 484, in 
_process_added_router
  Jun 15 11:31:07 cmp-001 neutron-l3-agent[31636]: ERROR neutron.agent.l3.agent 
self._router_added(router['id'], router)
  Jun 15 11:31:07 cmp-001 neutron-l3-agent[31636]: ERROR neutron.agent.l3.agent 
File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 375, in _router_added
  Jun 15 11:31:07 cmp-001 neutron-l3-agent[31636]: ERROR neutron.agent.l3.agent 
router_id)
  Jun 15 11:31:07 cmp-001 neutron-l3-agent[31636]: ERROR neutron.agent.l3.agent 
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, 
in __exit__
  Jun 15 11:31:07 cmp-001 neutron-l3-agent[31636]: ERROR neutron.agent.l3.agent 
self.force_reraise()
  Jun 15 11:31:07 cmp-001 neutron-l3-agent[31636]: ERROR neutron.agent.l3.agent 
File "/usr/loca

[Yahoo-eng-team] [Bug 1757482] Re: IP address for a router interface allowed outside the allocation range of subnet

2018-07-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/575444
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=54aa6e81cb17b33ce4d5d469cc11dec2869c762d
Submitter: Zuul
Branch:master

commit 54aa6e81cb17b33ce4d5d469cc11dec2869c762d
Author: Miguel Lavalle 
Date:   Thu Jun 14 09:21:09 2018 -0500

Disallow router interface out of subnet IP range

Currently, a non privileged tenant can add a router interface to a
shared / external network's subnet with an IP address outside the
subnet's allocation pool, creating a security risk. This patch prevents
tenants who are not the subnet's owner or admin from assigning a router
interface an IP address outside the subnet's allocation pool.

Change-Id: I32e76a83443dd8e7d79b396499747f29b4762e92
Closes-Bug: #1757482


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1757482

Title:
  IP address for a router interface allowed outside the allocation range
  of subnet

Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  Currently running Queens on Ubuntu 16.04 with the linuxbridge ml2
  plugin with vxlan overlays.  We have a single, large provider network
  that we have set to 'shared' and 'external', so people who need to do
  things that don't work well with NAT can connect their instances
  directly to the provider network.  Our 'allocation range' as defined
  in our provider subnet is dedicated to tenants, so there should be no
  conflicts.

  One of our users connected a neutron router to the provider network
  (not via the 'external network' option, but rather via the normal 'add
  interface' option) and neglected to specify an IP address.  The
  neutron router decided that it was now the gateway for the entire
  provider network and began arp'ing.

  This seems like it should be disallowed inside of neutron (you
  shouldn't be able to specify an IP address for a router interface that
  isn't explicitly part of your allocation range on said subnet).
  Unless neutron just expects issues like this to be handled by the
  physical provider infrastructure (spoofing prevention, etc.)?
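
  The fix referenced in the commit above amounts to a check along these lines;
  a rough sketch with hypothetical names, using only the standard library:

    import ipaddress

    def router_interface_ip_allowed(ip, allocation_pools, is_owner_or_admin):
        """Reject a router-interface IP outside the subnet's allocation pools
        unless the caller owns the subnet or is an admin (sketch of the
        behaviour described, not the neutron code)."""
        if is_owner_or_admin:
            return True
        addr = ipaddress.ip_address(ip)
        return any(
            ipaddress.ip_address(pool['start']) <= addr <= ipaddress.ip_address(pool['end'])
            for pool in allocation_pools)

    pools = [{'start': '203.0.113.10', 'end': '203.0.113.200'}]
    print(router_interface_ip_allowed('203.0.113.1', pools, is_owner_or_admin=False))   # False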

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1757482/+subscriptions


[Yahoo-eng-team] [Bug 1781156] Re: Randomly choose from multiple IPv6 stateless subnets

2018-07-12 Thread Hongbin Lu
According to the multiple IPv6 prefixes spec [1], it is intentional
to include all the stateless IPv6 addresses in the port, so the current
implementation seems to be correct. In addition, you can look at this
bug [2], in which people were complaining that neutron was not picking
up all the SLAAC addresses.

[1] 
https://specs.openstack.org/openstack/neutron-specs/specs/kilo/multiple-ipv6-prefixes.html
[2] https://bugs.launchpad.net/neutron/+bug/1358709
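
For illustration, the two IPv6 fixed_ips in the port output quoted below are
exactly the SLAAC (EUI-64) addresses derived from the port's MAC, one per
stateless subnet; a small stdlib-only sketch (the /64 prefixes are read off
the addresses in that output):

    import ipaddress

    def slaac_address(prefix, mac):
        """EUI-64 interface identifier from the MAC, appended to the /64
        prefix -- the same derivation the SLAAC addresses below follow."""
        octets = [int(part, 16) for part in mac.split(':')]
        octets[0] ^= 0x02                      # flip the universal/local bit
        eui64 = bytes(octets[:3] + [0xff, 0xfe] + octets[3:])
        network = ipaddress.IPv6Network(prefix)
        return ipaddress.IPv6Address(
            int(network.network_address) | int.from_bytes(eui64, 'big'))

    mac = 'fa:16:3e:05:ef:6e'
    print(slaac_address('fdf8:f53b:82e4::/64', mac))   # fdf8:f53b:82e4:0:f816:3eff:fe05:ef6e
    print(slaac_address('fdf8:f53c:82e4::/64', mac))   # fdf8:f53c:82e4:0:f816:3eff:fe05:ef6e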

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1781156

Title:
  Randomly choose from multiple IPv6 stateless subnets

Status in neutron:
  Invalid

Bug description:
  If we create a dual-stack network with an IPv4 subnet and two IPv6
  stateless subnets, the port gets one IPv4 and two IPv6 addresses:

  # neutron port-create test
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  Created a new port:
  +-----------------------+--------------------------------------------------------------------------------------------------------------+
  | Field                 | Value |
  +-----------------------+--------------------------------------------------------------------------------------------------------------+
  | admin_state_up        | True |
  | allowed_address_pairs | |
  | binding:host_id       | |
  | binding:profile       | {} |
  | binding:vif_details   | {} |
  | binding:vif_type      | unbound |
  | binding:vnic_type     | normal |
  | created_at            | 2018-07-11T07:58:14Z |
  | description           | |
  | device_id             | |
  | device_owner          | |
  | extra_dhcp_opts       | |
  | fixed_ips             | {"subnet_id": "36c4f1cc-4043-4d62-a6ee-db5704dc929a", "ip_address": "30.20.30.3"} |
  |                       | {"subnet_id": "1bb187de-dce2-429f-8e0f-f0e5357d5f49", "ip_address": "fdf8:f53b:82e4:0:f816:3eff:fe05:ef6e"} |
  |                       | {"subnet_id": "b7624e84-b956-41d2-a4d6-ca6a150200fc", "ip_address": "fdf8:f53c:82e4:0:f816:3eff:fe05:ef6e"} |
  | id                    | b7bc35ea-8c26-4fab-9ef4-8b009d3cba4a |
  | mac_address           | fa:16:3e:05:ef:6e |
  | name                  | |
  | network_id            | 75f2f23a-ab63-4560-bd61-92023700840d |
  | port_security_enabled | True |
  | project_id            | 213ea3d880074bbdab84918d70747a20 |
  | qos_policy_id         | |
  | revision_number       | 2 |
  | security_groups       | 04efec82-a93c-4d19-ad52-a34a7e1a558c |
  | status                | DOWN |
  | tags                  | |

[Yahoo-eng-team] [Bug 1771885] Re: bionic: static maas missing search domain in systemd-resolve configuration

2018-07-12 Thread Andres Rodriguez
I'm re-opening this task for MAAS, as a user has been able to reproduce
this issue in a different context. While there is a work-around in
comment #22, the situation is that even though the same network
configuration is sent for xenial and bionic deployments, the resulting
configuration differs, due to how cloud-init handles netplan
configuration.

More specifically, on Xenial, when MAAS sends DNS config both "globally"
and per interface/subnet, the resulting config is an aggregation of the
DNS configuration.

However, when deploying Bionic, when MAAS sends the exact same
configuration, cloud-init interprets it differently and *only* the
network configuration of the interface is taken into consideration,
while the global configuration is ignored.

As such, this *only* becomes an issue when the user overrides the DNS on
a specific subnet, which results in the search domain from the global
config not being applied.

As such, we will make an improvement in MAAS to ensure that the search
domain is always included regardless.
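
A toy model of the behaviour difference described above (assumption: this only
mirrors the observed xenial-vs-bionic outcome; it is not cloud-init code):

    def effective_dns(global_cfg, iface_cfg, aggregate):
        """xenial-style handling aggregates global and per-interface DNS;
        bionic/netplan-style handling lets a per-interface DNS stanza win
        outright, dropping the global search domain."""
        if aggregate:
            return {
                'nameservers': global_cfg.get('nameservers', []) + iface_cfg.get('nameservers', []),
                'search': global_cfg.get('search', []) + iface_cfg.get('search', []),
            }
        return dict(iface_cfg) if iface_cfg else dict(global_cfg)

    global_cfg = {'nameservers': ['10.245.168.6'], 'search': ['maas']}
    iface_cfg = {'nameservers': ['10.245.168.6']}    # per-subnet DNS override, no search

    print(effective_dns(global_cfg, iface_cfg, aggregate=True).get('search'))    # ['maas']
    print(effective_dns(global_cfg, iface_cfg, aggregate=False).get('search'))   # None -- the missing domain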

** Changed in: maas
   Status: Invalid => New

** Changed in: maas
Milestone: None => 2.5.0

** Also affects: maas/2.3
   Importance: Undecided
   Status: New

** Also affects: maas/2.4
   Importance: Undecided
   Status: New

** Changed in: maas/2.3
Milestone: None => 2.4.1

** Changed in: maas/2.4
Milestone: None => 2.3.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1771885

Title:
  bionic: static maas missing search domain in systemd-resolve
  configuration

Status in cloud-init:
  Won't Fix
Status in juju:
  Fix Released
Status in juju 2.3 series:
  Fix Released
Status in MAAS:
  New
Status in MAAS 2.3 series:
  New
Status in MAAS 2.4 series:
  New

Bug description:
  juju: 2.4-beta2  
  MAAS: 2.3.0

  Testing deployment of LXD containers on bionic (specifically for an
  openstack deployment) led to this problem:

  https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1765405

  Summary:

  previously, the DNS config in the LXD containers was the same as on the
  host machines

  now, the DNS config is in systemd, the DNS server is set correctly,
  but the search domain is missing, so hostnames won't resolve.

  Working resolv.conf on xenial lxd container:

  nameserver 10.245.168.6
  search maas

  Non-working "systemd-resolve --status":

  ...
  Link 21 (eth0)
Current Scopes: DNS
 LLMNR setting: yes
  MulticastDNS setting: no
DNSSEC setting: no
  DNSSEC supported: no
   DNS Servers: 10.245.168.6

  Working (now able to resolve hostnames after modifying netplan and
  adding search domain):

  Link 21 (eth0)
Current Scopes: DNS
 LLMNR setting: yes
  MulticastDNS setting: no
DNSSEC setting: no
  DNSSEC supported: no
   DNS Servers: 10.245.168.6
DNS Domain: maas

  ubuntu@juju-6406ff-2-lxd-2:/etc$ host node-name
  node-name.maas has address 10.245.168.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1771885/+subscriptions


[Yahoo-eng-team] [Bug 1780139] Re: Sending SIGHUP to neutron-server process causes it to hang

2018-07-12 Thread Brent Eagles
** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => Critical

** Changed in: tripleo
Milestone: None => rocky-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1780139

Title:
  Sending SIGHUP to neutron-server process causes it to hang

Status in neutron:
  Triaged
Status in tripleo:
  Triaged

Bug description:
  * High level description

  When sending SIGHUP to the neutron-server process in a neutron_api container, 
it looks like the main process locks up in a tight loop. strace output shows 
that it's waiting for a process that doesn't exist:
  wait4(0, 0x7ffe97e025a4, WNOHANG, NULL) = -1 ECHILD (No child processes)

This is problematic, because logrotate uses SIGHUP in the
containerized environment. It doesn't always happen: it might take one
or two signals, reasonably interspersed, to trigger it.
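
For context, the strace line corresponds to a non-blocking child-reap call like
the sketch below (hypothetical, not the oslo.service code); the hang is the
parent retrying this call in a tight loop even though it has no children left:

    import os

    def reap_one_child():
        """Non-blocking reap, as wait4(0, ..., WNOHANG) does."""
        try:
            pid, _status = os.waitpid(0, os.WNOHANG)
        except ChildProcessError:
            # This is the state the strace output shows:
            #   wait4(0, ..., WNOHANG, NULL) = -1 ECHILD (No child processes)
            # A healthy parent stops retrying at this point; the hung
            # neutron-server apparently keeps re-issuing the call.
            return None
        return pid or None   # 0 means children exist but none have exited yet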

  * Pre-conditions:

  I'm using CentOS 7 + Queens RDO
  "rdo_version": "c9fd24040454913b4a325741094285676fb7e7bc_a0a28280"

I first noticed the issue when the neutron_api docker container would stop
working on the control nodes; eventually it was traced back to the
logrotate_crond container sending SIGHUP to all the processes owning
log files in /var/log/containers. This doesn't happen every time, but
it's pretty easy to trigger on my system.

  * Step-by-step reproduction steps:

  # Start with a clean container

  docker restart neutron_api

  # Identify the neutron-server PID: (613782 in this case) and send
  SIGHUP

  kill -HUP 613782

  # the relevant log files generally look clean the first time:

  2018-07-04 16:50:34.730 7 INFO oslo_service.service [-] Caught SIGHUP, 
stopping children
  2018-07-04 16:50:34.739 7 INFO neutron.common.config [-] Logging enabled!
  2018-07-04 16:50:34.740 7 INFO neutron.common.config [-] 
/usr/bin/neutron-server version 12.0.3.dev17
  2018-07-04 16:50:34.761 33 INFO neutron.wsgi [-] (33) wsgi exited, 
is_accepting=True
  2018-07-04 16:50:34.761 27 INFO neutron.wsgi [-] (27) wsgi exited, 
is_accepting=True
  2018-07-04 16:50:34.761 28 INFO neutron.wsgi [-] (28) wsgi exited, 
is_accepting=True
  2018-07-04 16:50:34.761 30 INFO neutron.wsgi [-] (30) wsgi exited, 
is_accepting=True
  2018-07-04 16:50:34.761 7 INFO oslo_service.service [-] Caught SIGHUP, 
stopping children
  2018-07-04 16:50:34.761 32 INFO neutron.wsgi [-] (32) wsgi exited, 
is_accepting=True
  2018-07-04 16:50:34.761 34 INFO neutron.wsgi [-] (34) wsgi exited, 
is_accepting=True
  2018-07-04 16:50:34.761 29 INFO neutron.wsgi [-] (29) wsgi exited, 
is_accepting=True
  2018-07-04 16:50:34.761 31 INFO neutron.wsgi [-] (31) wsgi exited, 
is_accepting=True
  2018-07-04 16:50:34.771 7 INFO neutron.common.config [-] Logging enabled!
  2018-07-04 16:50:34.771 7 INFO neutron.common.config [-] 
/usr/bin/neutron-server version 12.0.3.dev17
  2018-07-04 16:50:34.792 7 INFO neutron.common.config [-] Logging enabled!
  2018-07-04 16:50:34.792 7 INFO neutron.common.config [-] 
/usr/bin/neutron-server version 12.0.3.dev17
  2018-07-04 16:50:34.807 7 INFO oslo_service.service [-] Child 27 exited with 
status 0
  2018-07-04 16:50:34.807 7 WARNING oslo_service.service [-] pid 27 not in 
child list
  2018-07-04 16:50:35.761 7 INFO oslo_service.service [-] Child 28 exited with 
status 0
  2018-07-04 16:50:35.764 7 INFO oslo_service.service [-] Child 29 exited with 
status 0
  2018-07-04 16:50:35.767 7 INFO oslo_service.service [-] Child 30 exited with 
status 0
  2018-07-04 16:50:35.768 78 INFO neutron.wsgi [-] (78) wsgi starting up on 
http://10.0.105.101:9696
  2018-07-04 16:50:35.771 79 INFO neutron.wsgi [-] (79) wsgi starting up on 
http://10.0.105.101:9696
  2018-07-04 16:50:35.770 7 INFO oslo_service.service [-] Child 31 exited with 
status 0
  2018-07-04 16:50:35.773 7 INFO oslo_service.service [-] Child 32 exited with 
status 0
  2018-07-04 16:50:35.774 80 INFO neutron.wsgi [-] (80) wsgi starting up on 
http://10.0.105.101:9696
  2018-07-04 16:50:35.776 7 INFO oslo_service.service [-] Child 33 exited with 
status 0
  2018-07-04 16:50:35.777 81 INFO neutron.wsgi [-] (81) wsgi starting up on 
http://10.0.105.101:9696
  2018-07-04 16:50:35.780 82 INFO neutron.wsgi [-] (82) wsgi starting up on 
http://10.0.105.101:9696
  2018-07-04 16:50:35.779 7 INFO oslo_service.service [-] Child 34 exited with 
status 0
  2018-07-04 16:50:35.782 7 INFO oslo_service.service [-] Child 43 exited with 
status 0
  2018-07-04 16:50:35.783 83 INFO neutron.wsgi [-] (83) wsgi starting up on 
http://10.0.105.101:9696
  2018-07-04 16:50:35.783 7 WARNING oslo_service.service [-] pid 43 not in 
child list
  2018-07-04 16:50:35.786 84 INFO neutron.wsgi [-] (84) wsgi starting up on 
http://10.0.105.101:9696

  # But on the second SIGHUP, the following happened:

  2018-07-04 16:52:08.821 7 INFO oslo_service.servi

[Yahoo-eng-team] [Bug 1780159] Re: Some inherited projects missing when listing user's projects

2018-07-12 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/581346
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=83e72d74431526b27b8a2f4ac362582a73edea44
Submitter: Zuul
Branch:master

commit 83e72d74431526b27b8a2f4ac362582a73edea44
Author: Sami MAKKI 
Date:   Tue Jul 10 14:21:28 2018 +0200

Invalidate 'computed assignments' cache when creating a project.

Without it, project listing results were missing projects on which the
user had an inherited role.

Change-Id: If8edb3d1d1d3a0dab691ab6c81dd4b42e3b10ab3
Closes-Bug: #1780159


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1780159

Title:
  Some inherited projects missing when listing user's projects

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When a project is added as a child to another project and a user has
  an inherited role as well as an explicit role on that parent project,
  the child project may not appear when the user lists their projects.

  It appears that the order in which the inherited and effective role
  assignments are made makes a difference.
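
  A toy model of the caching pitfall fixed by the commit above (assumption:
  the names and storage here are made up; keystone's real cache is per-method
  via oslo.cache/dogpile, not functools):

    import functools

    PROJECTS = {'parent': None, 'child': 'parent'}     # project -> parent
    INHERITED_ON = {('mradmin', 'parent')}             # (user, project) with an inherited role

    @functools.lru_cache(maxsize=None)
    def computed_projects(user):
        """'Computed assignments': projects carrying the role directly plus
        every project under a parent with an inherited role."""
        roots = {p for (u, p) in INHERITED_ON if u == user}
        return tuple(p for p, parent in PROJECTS.items()
                     if p in roots or parent in roots)

    print(computed_projects('mradmin'))   # ('parent', 'child')
    PROJECTS['child2'] = 'parent'         # create a new child project ...
    print(computed_projects('mradmin'))   # ... still ('parent', 'child'): stale cache, the bug
    computed_projects.cache_clear()       # the fix: invalidate on project create
    print(computed_projects('mradmin'))   # ('parent', 'child', 'child2')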

  What actually happens:

  # The parent
  $ openstack project show parent --children
  +-------------+--------------------------------------------+
  | Field       | Value                                      |
  +-------------+--------------------------------------------+
  | description |                                            |
  | domain_id   | default                                    |
  | enabled     | True                                       |
  | id          | da2265680b3844eaa241a14ac9ee07f1           |
  | is_domain   | False                                      |
  | name        | parent                                     |
  | parent_id   | default                                    |
  | subtree     | {'3e5e4084c9984d55935198eed49f7164': None} |
  | tags        | []                                         |
  +-------------+--------------------------------------------+

  # A first child
  $ openstack project show 3e5e4084c9984d55935198eed49f7164
  +-------------+----------------------------------+
  | Field       | Value                            |
  +-------------+----------------------------------+
  | description |                                  |
  | domain_id   | default                          |
  | enabled     | True                             |
  | id          | 3e5e4084c9984d55935198eed49f7164 |
  | is_domain   | False                            |
  | name        | child                            |
  | parent_id   | da2265680b3844eaa241a14ac9ee07f1 |
  | tags        | []                               |
  +-------------+----------------------------------+

  
  # Next, we give user mradmin the project_admin role on the parent project 
explicitly.
  $ openstack role add --project parent --user mradmin  project_admin

  # We give user mradmin the project_admin role on the parent project's subtree 
via inheritance.
  $ openstack role add --project parent --user mradmin  --inherited 
project_admin

  
  # When we list the projects as user mradmin, everything is fine for now.
  $ openstack project list
  +----------------------------------+--------+
  | ID                               | Name   |
  +----------------------------------+--------+
  | 3e5e4084c9984d55935198eed49f7164 | child  |
  | da2265680b3844eaa241a14ac9ee07f1 | parent |
  +----------------------------------+--------+

  * Important note: the first child project exists before we do the role
  assignments. The second child project is added after the role
  assignments.


  # Add a second child project to the parent project:
  $ openstack project create --parent parent child2
  +-------------+----------------------------------+
  | Field       | Value                            |
  +-------------+----------------------------------+
  | description |                                  |
  | domain_id   | default                          |
  | enabled     | True                             |
  | id          | c781f589110c4d07a96c40b50bc6bd19 |
  | is_domain   | False                            |
  | name        | child2                           |
  | parent_id   | da2265680b3844eaa241a14ac9ee07f1 |
  | tags        | []                               |
  +-------------+----------------------------------+

  
  # The second child does not appear when we list the projects as user mradmin
  $ openstack project list
  +----------------------------------+--------+
  | ID                               | Name   |
  +----------------------------------+--------+
  | 3e5e4084c9984d55935198eed49f7164 | child  |
  | da2265680b3844eaa241a14ac9ee07f1 | parent |
  +----------------------------------+--------+


  
  If we repeat the above except we re

[Yahoo-eng-team] [Bug 1781368] Re: KeyError: 'flavor' when nova boot

2018-07-12 Thread melanie witt
The KeyError is coming from python-novaclient, not nova, so I'm going to
mark this Invalid for nova and add python-novaclient to this bug.

** Changed in: nova
   Status: New => Invalid

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1781368

Title:
  KeyError: 'flavor' when nova boot

Status in OpenStack Compute (nova):
  Invalid
Status in python-novaclient:
  New

Bug description:
  A KeyError: 'flavor' may occur when booting a VM with nova boot.

  # nova --debug boot vm --image image-id --flavor flavor-id --nic net-id=id

  It shows the VM's information, but then it returns an error like this:

  DEBUG (session:419) RESP: [202] Content-Length: 462 Location: 
controller:8774/v2.1/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748
 Content-Type: application/json Openstack-Api-Version: compute 2.60 
X-Openstack-Nova-Api-Version: 2.60 Vary: OpenStack-API-Version, 
X-OpenStack-Nova-API-Version X-Openstack-Request-Id: 
req-07b4cdd4-2a22-4791-a6b1-7ea4b7628370 X-Compute-Request-Id: 
req-07b4cdd4-2a22-4791-a6b1-7ea4b7628370 Date: Thu, 12 Jul 2018 06:34:49 GMT
  RESP BODY: {"server": {"security_groups": [{"name": "default"}], 
"OS-DCF:diskConfig": "MANUAL", "id": "c095438d-7a8a-4b00-a53d-882a1f988748", 
"links": [{"href": 
"controller:8774/v2.1/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748",
 "rel": "self"}, {"href": 
"controller:8774/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748",
 "rel": "bookmark"}], "adminPass": "2oHes4hprNs8"}}

  DEBUG (session:722) POST call to compute for 
controller:8774/v2.1/7fcede66893a4d179f3d37958a06d801/servers used request id 
req-07b4cdd4-2a22-4791-a6b1-7ea4b7628370
  DEBUG (session:372) REQ: curl -g -i -X GET 
controller:8774/v2.1/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748
 -H "OpenStack-API-Version: compute 2.60" -H "User-Agent: python-novaclient" -H 
"Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.60" -H 
"X-Auth-Token: {SHA1}4dc236317ec12f176dd4e844738ec85cee5c12df"
  DEBUG (connectionpool:395) controller:8774 "GET 
/v2.1/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748
 HTTP/1.1" 404 111
  DEBUG (session:419) RESP: [404] Openstack-Api-Version: compute 2.60 
X-Openstack-Nova-Api-Version: 2.60 Vary: OpenStack-API-Version, 
X-OpenStack-Nova-API-Version Content-Type: application/json; charset=UTF-8 
Content-Length: 111 X-Openstack-Request-Id: 
req-10da61f5-3624-4b56-8c7b-849eb163b0fc X-Compute-Request-Id: 
req-10da61f5-3624-4b56-8c7b-849eb163b0fc Date: Thu, 12 Jul 2018 06:34:49 GMT
  RESP BODY: {"itemNotFound": {"message": "Instance 
c095438d-7a8a-4b00-a53d-882a1f988748 could not be found.", "code": 404}}

  DEBUG (session:722) GET call to compute for 
controller:8774/v2.1/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748
 used request id req-10da61f5-3624-4b56-8c7b-849eb163b0fc
  DEBUG (shell:793) 'flavor'
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 791, in 
main
  OpenStackComputeShell().main(argv)
File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 713, in 
main
  args.func(self.cs, args)
File "/usr/lib/python2.7/site-packages/novaclient/v2/shell.py", line 889, 
in do_boot
  _print_server(cs, args, server)
File "/usr/lib/python2.7/site-packages/novaclient/v2/shell.py", line 2240, 
in _print_server
  del info['flavor']
  KeyError: 'flavor'
  ERROR (KeyError): 'flavor'

  Environment
  ===
  openstack-nova-common-17.0.5-1.el7.noarch
  python-nova-17.0.5-1.el7.noarch
  openstack-nova-compute-17.0.5-1.el7.noarch
  python2-novaclient-10.1.0-1.el7.noarch
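
  In the spirit of the triage comment above, the crash is only in the client's
  result printing; a defensive variant of the failing line would be (sketch
  only, not the actual novaclient change):

    def _strip_embedded_fields(info):
        """The traceback shows `del info['flavor']` raising because the server
        body has no 'flavor' key (the follow-up GET returned 404); pop() with
        a default tolerates that."""
        info.pop('flavor', None)
        return info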

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1781368/+subscriptions


[Yahoo-eng-team] [Bug 1781439] [NEW] Test & document 1.28 (consumer gen) changes for /resource_providers/{u}/allocations

2018-07-12 Thread Eric Fried
Public bug reported:

I978fdea51f2d6c2572498ef80640c92ab38afe65 /
https://review.openstack.org/#/c/565604/ added placement microversion
1.28, which made various API operations consumer generation-aware. One
of the affected routes was /resource_providers/{u}/allocations - but
this route wasn't covered in the gabbi tests or the API reference
documentation.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1781439

Title:
  Test & document 1.28 (consumer gen) changes for
  /resource_providers/{u}/allocations

Status in OpenStack Compute (nova):
  New

Bug description:
  I978fdea51f2d6c2572498ef80640c92ab38afe65 /
  https://review.openstack.org/#/c/565604/ added placement microversion
  1.28, which made various API operations consumer generation-aware. One
  of the affected routes was /resource_providers/{u}/allocations - but
  this route wasn't covered in the gabbi tests or the API reference
  documentation.
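
  A hedged sketch of exercising the route in question (the endpoint, provider
  UUID and token below are placeholders; under microversion 1.28 each consumer
  in the response is expected to carry a consumer_generation that callers must
  send back on writes):

    import requests

    PLACEMENT = 'http://controller/placement'                   # placeholder
    RP_UUID = '30e0ba0e-3a3e-4bd9-8e6e-4f2b7a51a3a7'             # placeholder

    resp = requests.get(
        '{}/resource_providers/{}/allocations'.format(PLACEMENT, RP_UUID),
        headers={'X-Auth-Token': 'ADMIN_TOKEN',                 # placeholder
                 'OpenStack-API-Version': 'placement 1.28'})
    for consumer, alloc in resp.json().get('allocations', {}).items():
        print(consumer, alloc.get('consumer_generation'), alloc.get('resources'))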

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1781439/+subscriptions


[Yahoo-eng-team] [Bug 1781430] [NEW] AllocationList.delete_all() incorrectly assumes a single consumer

2018-07-12 Thread Jay Pipes
Public bug reported:

AllocationList.delete_all() looks like this:

```
def delete_all(self):
# Allocations can only have a single consumer, so take advantage of
# that fact and do an efficient batch delete
consumer_uuid = self.objects[0].consumer.uuid
_delete_allocations_for_consumer(self._context, consumer_uuid)
consumer_obj.delete_consumers_if_no_allocations(
self._context, [consumer_uuid])
```

the problem with the above is that it is based on an old assumption:
that a list of allocations  will only ever involve a single consumer.
This hasn't been the case ever since we introduced the ability to do
`POST /allocations` which was 1.12 or 1.13 IIRC.

The safety concern about the above is that if someone in code does this:

```
allocs = AllocationList.get_all_by_resource_provider(self.ctx, compute_node_provider)
allocs.delete_all()
```

they would reasonably expect all of the allocations for a provider to be
deleted. However, this is not the case. Only the allocations against the
"first" consumer will be deleted.

The fix is simple... check to see if there are >1 consumers in the
allocation list's objects and if so, don't call
_delete_allocations_for_consumer(). Instead, call
_delete_allocations_by_id() and do a DELETE FROM allocations WHERE id IN
(...).

** Affects: nova
 Importance: High
 Assignee: Jay Pipes (jaypipes)
 Status: Triaged


** Tags: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1781430

Title:
  AllocationList.delete_all() incorrectly assumes a single consumer

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  AllocationList.delete_all() looks like this:

  ```
  def delete_all(self):
  # Allocations can only have a single consumer, so take advantage of
  # that fact and do an efficient batch delete
  consumer_uuid = self.objects[0].consumer.uuid
  _delete_allocations_for_consumer(self._context, consumer_uuid)
  consumer_obj.delete_consumers_if_no_allocations(
  self._context, [consumer_uuid])
  ```

  the problem with the above is that it is based on an old assumption:
  that a list of allocations  will only ever involve a single consumer.
  This hasn't been the case ever since we introduced the ability to do
  `POST /allocations` which was 1.12 or 1.13 IIRC.

  The safety concern about the above is that if someone in code does
  this:

  ```
  allocs = AllocationList.get_all_by_resource_provider(self.ctx, compute_node_provider)
  allocs.delete_all()
  ```

  they would reasonably expect all of the allocations for a provider to
  be deleted. However, this is not the case. Only the allocations
  against the "first" consumer will be deleted.

  The fix is simple... check to see if there are >1 consumers in the
  allocation list's objects and if so, don't call
  _delete_allocations_for_consumer(). Instead, call
  _delete_allocations_by_id() and do a DELETE FROM allocations WHERE id
  IN (...).
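
  A sketch of the fix described above, reusing the helper names from this
  report (not the merged patch):

  ```
  def delete_all(self):
      # Allocation lists may span multiple consumers since `POST
      # /allocations`, so only take the single-consumer fast path when
      # that actually holds.
      consumer_uuids = {alloc.consumer.uuid for alloc in self.objects}
      if len(consumer_uuids) == 1:
          _delete_allocations_for_consumer(
              self._context, next(iter(consumer_uuids)))
      else:
          _delete_allocations_by_id(
              self._context, [alloc.id for alloc in self.objects])
      consumer_obj.delete_consumers_if_no_allocations(
          self._context, list(consumer_uuids))
  ```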

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1781430/+subscriptions


[Yahoo-eng-team] [Bug 1497253] Re: different availability zone for nova and cinder when AZ is not explicitly given

2018-07-12 Thread Matt Riedemann
I've confirmed with a devstack setup that this was fixed indirectly
in Pike with change https://review.openstack.org/#/c/446053/.

** Changed in: nova
   Status: In Progress => Fix Released

** Changed in: nova
 Assignee: Matt Riedemann (mriedem) => Dan Smith (danms)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497253

Title:
  different availability zone for nova and cinder when AZ is not
  explicitly given

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When booting an instance from an image, which should create a new volume,
  and no AZ is specified, the instance could end up in a different AZ than
  the volume.

  That doesn't hurt with cross_az_attach=true, but if this is set to
  False, creating the volume will fail with

  " Instance %(instance)s and volume %(vol)s are not in the same
  availability_zone" error.

  
  Nova actually decides at some point which AZ it should use (when none was
  specified), so I think we just need to move this decision to before the
  point when the volume is created, so nova can pass the correct AZ value to
  the cinder API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497253/+subscriptions


[Yahoo-eng-team] [Bug 1781421] [NEW] CantStartEngineError due to host aggregate up-call when boot from volume and [cinder]/cross_az_attach=False

2018-07-12 Thread Matt Riedemann
Public bug reported:

This is semi-related to bug 1497253 but I found it while triaging that
bug to see if it was still an issue since Pike (I don't think it is).

If you run devstack with default superconductor mode configuration, and
configure nova-cpu.conf with:

[cinder]
cross_az_attach=False

Then try to boot from volume where nova-compute creates the volume; it
fails with CantStartEngineError because the cell conductor
(n-cond-cell1.service) is not configured to reach the API DB to get host
aggregate information.

Here is a nova boot command to recreate:

$ nova boot --flavor cirros256 --block-device id=e642acfd-4283-458a-b7ea-6c316da3b2ce,source=image,dest=volume,shutdown=remove,size=1,bootindex=0 --poll test-bfv

Where the block device id is the uuid of the cirros image in the
devstack env.

This is the failure in the nova-compute logs:

http://paste.openstack.org/show/725723/

972-4b14-93ad-e7b86edc3a26 service nova] [instance: 
910509b9-e23a-4b40-bb42-0df7b65bb36e] Getting AZ for instance; instance.host: 
rocky; instance.availabilty_zone: nova
3-c972-4b14-93ad-e7b86edc3a26 service nova] [instance: 
910509b9-e23a-4b40-bb42-0df7b65bb36e] Instance failed block device setup: 
RemoteError: Remote error: CantStartEngineEr
  File "/opt/stack/nova/nova/conductor/manager.py", line 124, in 
_object_dispatch\nreturn getattr(target, method)(*args, **kwargs)\n', u'  
File "/usr/local/lib/python2.7
b9-e23a-4b40-bb42-0df7b65bb36e] Traceback (most recent call last):
b9-e23a-4b40-bb42-0df7b65bb36e]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1564, in _prep_block_device
b9-e23a-4b40-bb42-0df7b65bb36e] 
wait_func=self._await_block_device_map_created)
b9-e23a-4b40-bb42-0df7b65bb36e]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 854, in attach_block_devices
b9-e23a-4b40-bb42-0df7b65bb36e] _log_and_attach(device)
b9-e23a-4b40-bb42-0df7b65bb36e]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 851, in _log_and_attach
b9-e23a-4b40-bb42-0df7b65bb36e] bdm.attach(*attach_args, **attach_kwargs)
b9-e23a-4b40-bb42-0df7b65bb36e]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 747, in attach
b9-e23a-4b40-bb42-0df7b65bb36e] context, instance, volume_api, virt_driver)
b9-e23a-4b40-bb42-0df7b65bb36e]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 46, in wrapped
b9-e23a-4b40-bb42-0df7b65bb36e] ret_val = method(obj, context, *args, 
**kwargs)
b9-e23a-4b40-bb42-0df7b65bb36e]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 623, in attach
b9-e23a-4b40-bb42-0df7b65bb36e] instance=instance)
b9-e23a-4b40-bb42-0df7b65bb36e]   File "/opt/stack/nova/nova/volume/cinder.py", 
line 504, in check_availability_zone
b9-e23a-4b40-bb42-0df7b65bb36e] instance_az = 
az.get_instance_availability_zone(context, instance)
b9-e23a-4b40-bb42-0df7b65bb36e]   File 
"/opt/stack/nova/nova/availability_zones.py", line 194, in 
get_instance_availability_zone
b9-e23a-4b40-bb42-0df7b65bb36e] az = get_host_availability_zone(elevated, 
host)
b9-e23a-4b40-bb42-0df7b65bb36e]   File 
"/opt/stack/nova/nova/availability_zones.py", line 95, in 
get_host_availability_zone
b9-e23a-4b40-bb42-0df7b65bb36e] key='availability_zone')
b9-e23a-4b40-bb42-0df7b65bb36e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
177, in wrapper
b9-e23a-4b40-bb42-0df7b65bb36e] args, kwargs)
b9-e23a-4b40-bb42-0df7b65bb36e]   File 
"/opt/stack/nova/nova/conductor/rpcapi.py", line 241, in 
object_class_action_versions
b9-e23a-4b40-bb42-0df7b65bb36e] args=args, kwargs=kwargs)
b9-e23a-4b40-bb42-0df7b65bb36e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
179, in call
b9-e23a-4b40-bb42-0df7b65bb36e] retry=self.retry)
b9-e23a-4b40-bb42-0df7b65bb36e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 133, 
in _send
b9-e23a-4b40-bb42-0df7b65bb36e] retry=retry)
b9-e23a-4b40-bb42-0df7b65bb36e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 584, in send
b9-e23a-4b40-bb42-0df7b65bb36e] call_monitor_timeout, retry=retry)
b9-e23a-4b40-bb42-0df7b65bb36e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 575, in _send
b9-e23a-4b40-bb42-0df7b65bb36e] raise result
b9-e23a-4b40-bb42-0df7b65bb36e] RemoteError: Remote error: CantStartEngineError 
No sql_connection parameter is established
b9-e23a-4b40-bb42-0df7b65bb36e] [u'Traceback (most recent call last):\n', u'  
File "/opt/stack/nova/nova/conductor/manager.py", line 124, in 
_object_dispatch\nreturn get
b9-e23a-4b40-bb42-0df7b65bb36e] 

The logging at the start is my own for debug:

972-4b14-93ad-e7b86edc3a26 service nova] [instance: 910509b9-e23a-
4b40-bb42-0df7b65bb36e] Getting AZ for instance; instance.host: rocky;
instance.availabilty_zone: nova

But it shows that the instance.host and instance.availability_zone are
set. The instance

[Yahoo-eng-team] [Bug 1781391] [NEW] cellv2_delete_host when host not found by ComputeNodeList

2018-07-12 Thread Chen
Public bug reported:

Problematic Situation:

1 check the hosts visible to nova compute
nova hypervisor-list
id   hypervisor hostname   state   status
xx   compute2               up      enabled
 
2 check the hosts visible to cellv2
nova-manage cell_v2 list_hosts
cell name   cell uuid  hostname
cell1  compute1
cell1  compute2
Here compute1, which does not actually exist any more (e.g. it was renamed),
still remains in cell_mappings.

3 now try to delete host compute1
nova-manage cell_v2 delete_host --cell_uuid  --host compute1
then the following error is shown:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1620, in main
ret = fn(*fn_args, **fn_kwargs)
  File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1558, in 
delete_host
nodes = objects.ComputeNodeList.get_all_by_host(cctxt, host)
  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
result = fn(cls, context, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/objects/compute_node.py", line 
437, in get_all_by_host
use_slave=use_slave)
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 225, 
in wrapper
return f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/objects/compute_node.py", line 
432, in _db_compute_node_get_all_by_host
return db.compute_node_get_all_by_host(context, host)
  File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 297, in 
compute_node_get_all_by_host
return IMPL.compute_node_get_all_by_host(context, host)
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 270, 
in wrapped
return f(context, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 672, 
in compute_node_get_all_by_host
raise exception.ComputeHostNotFound(host=host)
ComputeHostNotFound: Compute host compute1 could not be found.

I'm not quite sure of the exact way to reproduce it, but I think it would be
nicer to allow the delete_host operation in situations like this.

** Affects: nova
 Importance: Undecided
 Assignee: Chen (chenn2)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Chen (chenn2)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1781391

Title:
  cellv2_delete_host when host not found by ComputeNodeList

Status in OpenStack Compute (nova):
  New

Bug description:
  Problematic Situation:

  1 check the hosts visible to nova compute
  nova hypervisor-list
  id   hypervisor hostname   state   status
  xx   compute2               up      enabled
   
  2 check the hosts visible to cellv2
  nova-manage cell_v2 list_hosts
  cell name   cell uuid  hostname
  cell1  compute1
  cell1  compute2
  Here compute1, which does not actually exist any more (e.g. it was renamed),
  still remains in cell_mappings.

  3 now try to delete host compute1
  nova-manage cell_v2 delete_host --cell_uuid  --host compute1
  then the following error is shown:
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1620, in 
main
  ret = fn(*fn_args, **fn_kwargs)
File "/usr/lib/python2.7/site-packages/nova/cmd/manage.py", line 1558, in 
delete_host
  nodes = objects.ComputeNodeList.get_all_by_host(cctxt, host)
File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
  result = fn(cls, context, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/objects/compute_node.py", line 
437, in get_all_by_host
  use_slave=use_slave)
File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
225, in wrapper
  return f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/objects/compute_node.py", line 
432, in _db_compute_node_get_all_by_host
  return db.compute_node_get_all_by_host(context, host)
File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 297, in 
compute_node_get_all_by_host
  return IMPL.compute_node_get_all_by_host(context, host)
File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
270, in wrapped
  return f(context, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 
672, in compute_node_get_all_by_host
  raise exception.ComputeHostNotFound(host=host)
  ComputeHostNotFound: Compute host compute1 could not be found.

  I'm not quite sure of the exact way to reproduce it, but I think it would
  be nicer to allow the delete_host operation in situations like this.
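
  A sketch of the tolerance being suggested (hypothetical, not a merged
  patch): let delete_host keep going when no compute node record matches, so
  a stale host mapping can still be removed.

    from nova import exception
    from nova import objects

    def _get_compute_nodes_or_empty(cctxt, host):
        """Wrap the lookup that raises in the traceback above."""
        try:
            return objects.ComputeNodeList.get_all_by_host(cctxt, host)
        except exception.ComputeHostNotFound:
            return []   # proceed and delete the stale host mapping anyway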

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1781391/+subscriptions


[Yahoo-eng-team] [Bug 1781372] [NEW] Neutron security group resource logging presents in ovs-agent.log

2018-07-12 Thread LIU Yulong
Public bug reported:

ENV:
neutron: stable/queens 12.0.2
$ uname -r
3.10.0-862.3.2.el7.x86_64

$ sudo ovs-appctl vlog/list
 consolesyslogfile
 -------
backtrace  OFFDBG   INFO
...
ALL-app  OFFDBG   INFO

Configurations:
neutron.conf
service_plugins = ...,log

openvswitch_agent.ini
[agent]
extensions = ...,log

[network_log]
rate_limit = 100
burst_limit = 25

Neutron security group resource logging presents in ovs-agent.log:
http://paste.openstack.org/show/725657/, here line 2 is the DROP action log.

As the doc file says "log information is written to the destination if 
configured in local_output_log_base or system journal like /var/log/syslog."
https://docs.openstack.org/neutron/queens/admin/config-logging.html

So, if I disable the ovs-agent DEBUG level service log, the resource LOG
is not visible anymore.

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  ENV:
  neutron: stable/queens 12.0.2
  $ uname -r
  3.10.0-862.3.2.el7.x86_64
  
  $ sudo ovs-appctl vlog/list
-  consolesyslogfile
-  -------
+  consolesyslogfile
+  -------
  backtrace  OFFDBG   INFO
  ...
  ALL-app  OFFDBG   INFO
  
  Configrations:
  neutron.conf
  service_plugins = ...,log
  
  openvswitch_agent.ini
  [agent]=
  extensions = ...,log
  
  [network_log]
  rate_limit = 100
  burst_limit = 25
  
- 
  Neutron security group resource logging presents in ovs-agent.log:
  http://paste.openstack.org/show/725657/, here line 2 is the DROP action log.
+ 
+ As the doc file says "log information is written to the destination if 
configured in local_output_log_base or system journal like /var/log/syslog."
+ https://docs.openstack.org/neutron/queens/admin/config-logging.html
+ 
+ So, if I disable the ovs-agent DEBUG leve service log, the resource LOG
+ will not see anymore.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1781372

Title:
  Neutron security group resource logging presents in ovs-agent.log

Status in neutron:
  New

Bug description:
  ENV:
  neutron: stable/queens 12.0.2
  $ uname -r
  3.10.0-862.3.2.el7.x86_64

  $ sudo ovs-appctl vlog/list
   consolesyslogfile
   -------
  backtrace  OFFDBG   INFO
  ...
  ALL-app  OFFDBG   INFO

  Configurations:
  neutron.conf
  service_plugins = ...,log

  openvswitch_agent.ini
  [agent]
  extensions = ...,log

  [network_log]
  rate_limit = 100
  burst_limit = 25

  Neutron security group resource logging presents in ovs-agent.log:
  http://paste.openstack.org/show/725657/, here line 2 is the DROP action log.

  As the doc file says "log information is written to the destination if 
configured in local_output_log_base or system journal like /var/log/syslog."
  https://docs.openstack.org/neutron/queens/admin/config-logging.html

  So, if I disable the ovs-agent DEBUG level service log, the resource
  LOG is not visible anymore.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1781372/+subscriptions


[Yahoo-eng-team] [Bug 1781368] [NEW] KeyError: 'flavor' when nova boot

2018-07-12 Thread zhaolihui
Public bug reported:

A KeyError: 'flavor' may occur when booting a VM with nova boot.

# nova --debug boot vm --image image-id --flavor flavor-id --nic net-id=id

It shows the VM's information, but then it returns an error like this:

DEBUG (session:419) RESP: [202] Content-Length: 462 Location: 
controller:8774/v2.1/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748
 Content-Type: application/json Openstack-Api-Version: compute 2.60 
X-Openstack-Nova-Api-Version: 2.60 Vary: OpenStack-API-Version, 
X-OpenStack-Nova-API-Version X-Openstack-Request-Id: 
req-07b4cdd4-2a22-4791-a6b1-7ea4b7628370 X-Compute-Request-Id: 
req-07b4cdd4-2a22-4791-a6b1-7ea4b7628370 Date: Thu, 12 Jul 2018 06:34:49 GMT
RESP BODY: {"server": {"security_groups": [{"name": "default"}], 
"OS-DCF:diskConfig": "MANUAL", "id": "c095438d-7a8a-4b00-a53d-882a1f988748", 
"links": [{"href": 
"controller:8774/v2.1/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748",
 "rel": "self"}, {"href": 
"controller:8774/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748",
 "rel": "bookmark"}], "adminPass": "2oHes4hprNs8"}}

DEBUG (session:722) POST call to compute for 
controller:8774/v2.1/7fcede66893a4d179f3d37958a06d801/servers used request id 
req-07b4cdd4-2a22-4791-a6b1-7ea4b7628370
DEBUG (session:372) REQ: curl -g -i -X GET 
controller:8774/v2.1/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748
 -H "OpenStack-API-Version: compute 2.60" -H "User-Agent: python-novaclient" -H 
"Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.60" -H 
"X-Auth-Token: {SHA1}4dc236317ec12f176dd4e844738ec85cee5c12df"
DEBUG (connectionpool:395) controller:8774 "GET 
/v2.1/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748
 HTTP/1.1" 404 111
DEBUG (session:419) RESP: [404] Openstack-Api-Version: compute 2.60 
X-Openstack-Nova-Api-Version: 2.60 Vary: OpenStack-API-Version, 
X-OpenStack-Nova-API-Version Content-Type: application/json; charset=UTF-8 
Content-Length: 111 X-Openstack-Request-Id: 
req-10da61f5-3624-4b56-8c7b-849eb163b0fc X-Compute-Request-Id: 
req-10da61f5-3624-4b56-8c7b-849eb163b0fc Date: Thu, 12 Jul 2018 06:34:49 GMT
RESP BODY: {"itemNotFound": {"message": "Instance 
c095438d-7a8a-4b00-a53d-882a1f988748 could not be found.", "code": 404}}

DEBUG (session:722) GET call to compute for 
controller:8774/v2.1/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748
 used request id req-10da61f5-3624-4b56-8c7b-849eb163b0fc
DEBUG (shell:793) 'flavor'
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 791, in main
OpenStackComputeShell().main(argv)
  File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 713, in main
args.func(self.cs, args)
  File "/usr/lib/python2.7/site-packages/novaclient/v2/shell.py", line 889, in 
do_boot
_print_server(cs, args, server)
  File "/usr/lib/python2.7/site-packages/novaclient/v2/shell.py", line 2240, in 
_print_server
del info['flavor']
KeyError: 'flavor'
ERROR (KeyError): 'flavor'

Environment
===
openstack-nova-common-17.0.5-1.el7.noarch
python-nova-17.0.5-1.el7.noarch
openstack-nova-compute-17.0.5-1.el7.noarch
python2-novaclient-10.1.0-1.el7.noarch

** Affects: nova
 Importance: Undecided
 Assignee: zhaolihui (zhaolh)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => zhaolihui (zhaolh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1781368

Title:
  KeyError: 'flavor' when nova boot

Status in OpenStack Compute (nova):
  New

Bug description:
  A KeyError: 'flavor' may occur when booting a VM with nova boot.

  # nova --debug boot vm --image image-id --flavor flavor-id --nic net-id=id

  It shows the VM's information, but then it returns an error like this:

  DEBUG (session:419) RESP: [202] Content-Length: 462 Location: 
controller:8774/v2.1/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748
 Content-Type: application/json Openstack-Api-Version: compute 2.60 
X-Openstack-Nova-Api-Version: 2.60 Vary: OpenStack-API-Version, 
X-OpenStack-Nova-API-Version X-Openstack-Request-Id: 
req-07b4cdd4-2a22-4791-a6b1-7ea4b7628370 X-Compute-Request-Id: 
req-07b4cdd4-2a22-4791-a6b1-7ea4b7628370 Date: Thu, 12 Jul 2018 06:34:49 GMT
  RESP BODY: {"server": {"security_groups": [{"name": "default"}], 
"OS-DCF:diskConfig": "MANUAL", "id": "c095438d-7a8a-4b00-a53d-882a1f988748", 
"links": [{"href": 
"controller:8774/v2.1/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748",
 "rel": "self"}, {"href": 
"controller:8774/7fcede66893a4d179f3d37958a06d801/servers/c095438d-7a8a-4b00-a53d-882a1f988748",
 "rel": "bookmark"}], "adminPass": "2oHes4hprNs8"}}

  DEBUG (session:722) POST call to compute for 
controller:8774/v2.1/7fcede66893a4d179f3d37958a06d8

[Yahoo-eng-team] [Bug 1781354] [NEW] VPNaaS: IPsec siteconnection status DOWN while using IKE v2

2018-07-12 Thread Dongcan Ye
Public bug reported:

While using an IKE policy with version v2, the IPsec site connection status
is always DOWN, but the network traffic is OK.

From the ipsec status we can see that the IPsec connection is
established:

# ip netns exec snat-a4d93552-c534-4a2c-96f7-c9b0ea918ba7 ipsec whack --ctlbase 
/var/lib/neutron/ipsec/a4d93552-c534-4a2c-96f7-c9b0ea918ba7/var/run/pluto 
--status
000 Total IPsec connections: loaded 3, active 1
000
000 State Information: DDoS cookies not required, Accepting new IKE connections
000 IKE SAs: total(1), half-open(0), open(0), authenticated(1), anonymous(0)
000 IPsec SAs: total(1), authenticated(1), anonymous(0)
000
000 #2: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1":500 STATE_PARENT_I3 (PARENT 
SA established); EVENT_SA_REPLACE in 2364s; newest IPSEC; eroute owner; 
isakmp#1; idle; import:admin initiate
000 #2: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1" esp.2d6840c8@172.16.2.130 
esp.5d0c4043@172.16.2.123 tun.0@172.16.2.130 tun.0@172.16.2.123 ref=0 
refhim=4294901761 Traffic: ESPin=0B ESPout=0B! ESPmax=0B
000 #1: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1":500 STATE_PARENT_I3 (PARENT 
SA established); EVENT_SA_REPLACE in 2574s; newest ISAKMP; isakmp#0; idle; 
import:admin initiate
000 #1: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1" ref=0 refhim=0 Traffic:
000
000 Bare Shunt list:
000

I think we should match "PARENT SA" in IKE v2. [1]

[1] https://libreswan.org/wiki/How_to_read_status_output

** Affects: neutron
 Importance: Undecided
 Assignee: Dongcan Ye (hellochosen)
 Status: New


** Tags: vpnaas

** Changed in: neutron
 Assignee: (unassigned) => Dongcan Ye (hellochosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1781354

Title:
  VPNaaS: IPsec siteconnection status DOWN while using IKE v2

Status in neutron:
  New

Bug description:
  While using an IKE policy with version v2, the IPsec site connection
  status is always DOWN, but the network traffic is OK.

  From the ipsec status we can see that the ipsec connection is
  established:

  # ip netns exec snat-a4d93552-c534-4a2c-96f7-c9b0ea918ba7 ipsec whack 
--ctlbase 
/var/lib/neutron/ipsec/a4d93552-c534-4a2c-96f7-c9b0ea918ba7/var/run/pluto 
--status
  000 Total IPsec connections: loaded 3, active 1
  000
  000 State Information: DDoS cookies not required, Accepting new IKE 
connections
  000 IKE SAs: total(1), half-open(0), open(0), authenticated(1), anonymous(0)
  000 IPsec SAs: total(1), authenticated(1), anonymous(0)
  000
  000 #2: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1":500 STATE_PARENT_I3 
(PARENT SA established); EVENT_SA_REPLACE in 2364s; newest IPSEC; eroute owner; 
isakmp#1; idle; import:admin initiate
  000 #2: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1" esp.2d6840c8@172.16.2.130 
esp.5d0c4043@172.16.2.123 tun.0@172.16.2.130 tun.0@172.16.2.123 ref=0 
refhim=4294901761 Traffic: ESPin=0B ESPout=0B! ESPmax=0B
  000 #1: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1":500 STATE_PARENT_I3 
(PARENT SA established); EVENT_SA_REPLACE in 2574s; newest ISAKMP; isakmp#0; 
idle; import:admin initiate
  000 #1: "b42f6ee6-acf3-4d2d-beb9-f115d68fef55/0x1" ref=0 refhim=0 Traffic:
  000
  000 Bare Shunt list:
  000

  I think we should match "PARENT SA" in IKE v2. [1]

  [1] https://libreswan.org/wiki/How_to_read_status_output
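
  A sketch of the matching change being proposed (hypothetical helper, not the
  neutron-vpnaas code):

    import re

    # libreswan reports "ISAKMP SA established" for IKEv1 and, as in the
    # whack --status output above, "PARENT SA established" for IKEv2.
    ESTABLISHED = re.compile(r'(ISAKMP|PARENT) SA established')

    def connection_is_up(whack_status_output):
        return bool(ESTABLISHED.search(whack_status_output))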

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1781354/+subscriptions
