[Yahoo-eng-team] [Bug 1891333] Re: neutron designate DNS dns_domain assignment issue

2020-10-12 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1891333

Title:
  neutron designate DNS dns_domain assignment issue

Status in neutron:
  Expired

Bug description:
  I have deployed OpenStack using openstack-ansible (Ussuri on CentOS 8).
  I have integrated Designate with Neutron and am trying to verify my
  setup, so I did the following.

  
  # Mapping the network with dns_domain = foo.com.
  openstack network set e7b11bae-e7fa-42c8-9739-862b60d5acce --dns-domain foo.com.

  # Creating a port to verify the dns_domain assignment. As you can see
  in the output, it is using example.com., which is configured in the
  /etc/neutron/neutron.conf file. (Question: how do I assign a
  dns_domain for each of my networks?)

  [root@aio1-utility-container-2f1b7f5e ~]# neutron port-create e7b11bae-e7fa-42c8-9739-862b60d5acce --dns_name my-port-bar
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  Created a new port:
  +-----------------------+-----------------------------------------------------------------------------------------------+
  | Field                 | Value                                                                                         |
  +-----------------------+-----------------------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                                          |
  | allowed_address_pairs |                                                                                               |
  | binding:host_id       |                                                                                               |
  | binding:profile       | {}                                                                                            |
  | binding:vif_details   | {}                                                                                            |
  | binding:vif_type      | unbound                                                                                       |
  | binding:vnic_type     | normal                                                                                        |
  | created_at            | 2020-08-12T13:13:01Z                                                                          |
  | description           |                                                                                               |
  | device_id             |                                                                                               |
  | device_owner          |                                                                                               |
  | dns_assignment        | {"ip_address": "192.168.74.8", "hostname": "my-port-bar", "fqdn": "my-port-bar.example.com."} |
  | dns_name              | my-port-bar                                                                                   |
  | extra_dhcp_opts       |                                                                                               |
  | fixed_ips             | {"subnet_id": "7896c02c-d625-4477-a547-2a6641fac05b", "ip_address": "192.168.74.8"}           |
  | id                    | e2243a87-5880-4b92-ac43-5d16d1b18cee                                                          |
  | mac_address           | fa:16:3e:e3:00:2a                                                                             |
  | name                  |                                                                                               |
  | network_id            | e7b11bae-e7fa-42c8-9739-862b60d5acce                                                          |
  | port_security_enabled | True                                                                                          |
  | project_id            | e50d05805d714f63b2583b830170280a                                                              |
  | revision_number       | 1                                                                                             |
  | security_groups       | ef55cda9-a64d-46eb-9897-acd1f2bd6374                                                          |
  | status                | DOWN                                                                                          |
  | tags                  |                                                                                               |
  | tenant_id             | e50d05805d714f63b2583b830170280a                                                              |
  | updated_at            | 2020-08-12T13:13:01Z                                                                          |
  +-----------------------+-----------------------------------------------------------------------------------------------+

[Yahoo-eng-team] [Bug 1899541] [NEW] archive_deleted_rows archives pci_devices records as residue because of 'instance_uuid'

2020-10-12 Thread melanie witt
Public bug reported:

This is based on a bug reported downstream [1] where after a random
amount of time, update_available_resource began to fail with the
following trace on nodes with PCI devices:

  "traceback": [
    "Traceback (most recent call last):",
    "  File \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 7447, in update_available_resource_for_node",
    "    rt.update_available_resource(context, nodename)",
    "  File \"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py\", line 706, in update_available_resource",
    "    self._update_available_resource(context, resources)",
    "  File \"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py\", line 274, in inner",
    "    return f(*args, **kwargs)",
    "  File \"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py\", line 782, in _update_available_resource",
    "    self._update(context, cn)",
    "  File \"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py\", line 926, in _update",
    "    self.pci_tracker.save(context)",
    "  File \"/usr/lib/python2.7/site-packages/nova/pci/manager.py\", line 92, in save",
    "    dev.save()",
    "  File \"/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py\", line 210, in wrapper",
    "    ctxt, self, fn.__name__, args, kwargs)",
    "  File \"/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py\", line 245, in object_action",
    "    objmethod=objmethod, args=args, kwargs=kwargs)",
    "  File \"/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py\", line 174, in call",
    "    retry=self.retry)",
    "  File \"/usr/lib/python2.7/site-packages/oslo_messaging/transport.py\", line 131, in _send",
    "    timeout=timeout, retry=retry)",
    "  File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", line 559, in send",
    "    retry=retry)",
    "  File \"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py\", line 550, in _send",
    "    raise result",
    "RemoteError: Remote error: DBError (pymysql.err.IntegrityError) (1048, u\"Column 'compute_node_id' cannot be null\") [SQL: u'INSERT INTO pci_devices (created_at, updated_at, deleted_at, deleted, uuid, compute_node_id, address, vendor_id, product_id, dev_type, dev_id, label, status, request_id, extra_info, instance_uuid, numa_node, parent_addr) VALUES (%(created_at)s, %(updated_at)s, %(deleted_at)s, %(deleted)s, %(uuid)s, %(compute_node_id)s, %(address)s, %(vendor_id)s, %(product_id)s, %(dev_type)s, %(dev_id)s, %(label)s, %(status)s, %(request_id)s, %(extra_info)s, %(instance_uuid)s, %(numa_node)s, %(parent_addr)s)'] [parameters: {'status': u'available', 'instance_uuid': None, 'dev_type': None, 'uuid': None, 'dev_id': None, 'parent_addr': None, 'numa_node': None, 'created_at': datetime.datetime(2020, 8, 7, 11, 51, 19, 643044), 'vendor_id': None, 'updated_at': None, 'label': None, 'deleted': 0, 'extra_info': '{}', 'compute_node_id': None, 'request_id': None, 'deleted_at': None, 'address': None, 'product_id': None}] (Background on this error at: http://sqlalche.me/e/gkpj)",


Here ^ we see an attempt to insert a nearly empty (NULL fields) record
into the pci_devices table. Inspection of the code shows that this can
occur if we fail to look up the pci_devices record we want and then try
to create a new one [2]:


@pick_context_manager_writer
def pci_device_update(context, node_id, address, values):
    query = model_query(context, models.PciDevice, read_deleted="no").\
        filter_by(compute_node_id=node_id).\
        filter_by(address=address)
    if query.update(values) == 0:
        device = models.PciDevice()
        device.update(values)
        context.session.add(device)
    return query.one()


It turns out that when a request came in to delete an instance that had
allocated a PCI device, if the archive_deleted_rows cron job fired at
just the right (wrong) moment, it would sweep away the pci_devices
record matching the instance_uuid, because archive treats any table
with an 'instance_uuid' column as instance "residue" needing cleanup.

So after the pci_devices record was swept away, we tried to update the
resource tracker as part of the _complete_deletion method in the compute
manager and that failed because we could not locate the pci_devices
record to free the PCI device (null out the instance_uuid field).

What we need to do here is stop treating pci_devices table records as
instance residue. The records in pci_devices are not tied to instance
lifecycles at all; they are managed independently by the PCI tracker.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1867124
[2] 
https://github.com/openstack/nova/blob/261de76104ca67bed3ea6cdbcaaab0e44030f1e2/nova/db/sqlalchemy/api.py#L4406-L4409
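
The proposed fix can be sketched as follows. This is an illustrative
sketch only, not nova's actual implementation: the helper name and the
table representation are hypothetical, but it captures the idea of
excluding pci_devices from the 'instance_uuid'-based residue sweep.

```python
# Hypothetical sketch of the table-selection fix: tables whose rows are
# managed independently of instance lifecycles must be excluded, even
# though they carry an 'instance_uuid' column.
EXCLUDED_TABLES = {"pci_devices"}  # owned by the PCI tracker, not the instance

def find_residue_tables(tables):
    """Return the tables whose rows should be archived with deleted instances.

    tables: mapping of table name -> list of column names. Selecting every
    table with an 'instance_uuid' column over-matches: pci_devices has one,
    but its records outlive the instances that use them.
    """
    return sorted(
        name
        for name, columns in tables.items()
        if "instance_uuid" in columns and name not in EXCLUDED_TABLES
    )

schema = {
    "instance_faults": ["id", "instance_uuid", "message"],
    "pci_devices": ["id", "compute_node_id", "instance_uuid"],
    "compute_nodes": ["id", "hypervisor_hostname"],
}
print(find_residue_tables(schema))  # pci_devices is skipped
</imports>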

** Affects: nova
 Importance: Medium
 Assignee: melanie witt (melwitt)
 Status: In Progress



[Yahoo-eng-team] [Bug 1899229] Re: Nova compute log can get the password info from the user_data

2020-10-12 Thread Jeremy Stanley
Also, the duplicate (bug 1899228) is still public anyway.

** Information type changed from Private Security to Public

** Also affects: ossa
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1899229

Title:
  Nova compute log can get the password info from the user_data

Status in OpenStack Compute (nova):
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Here is the log in /var/log/nova/nova-compute.log, where we can see
  user_data='I2Nsb3VkLWNvbmZpZwpjaHBhc3N3ZDoKICBsaXN0OiB8CiAgICByb290OjEyMzQ1Njc4CiAgZXhwaXJlOiBGYWxzZQ=='.
  If you base64-decode it with Python, it translates to
  '#cloud-config\nchpasswd:\n  list: |\n    root:12345678\n  expire:
  False', so we can see the root password is 12345678. Here is the
  method:

  >>> base64.b64decode("I2Nsb3VkLWNvbmZpZwpjaHBhc3N3ZDoKICBsaXN0OiB8CiAgICByb290OjEyMzQ1Njc4CiAgZXhwaXJlOiBGYWxzZQ==")
  b'#cloud-config\nchpasswd:\n  list: |\n    root:12345678\n  expire: False'

  Although the password is only base64-encoded, not encrypted, it is
  trivial to decode.

  So, in order to avoid this, maybe we should not display the password
  info in the log?
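
The suggested mitigation can be sketched like this. The mask_user_data
helper is hypothetical (not an existing nova function); the point is to
log only the size of the payload, never its content:

```python
import base64

def mask_user_data(user_data_b64):
    """Return a log-safe placeholder for base64-encoded user_data.

    The payload may contain secrets such as a chpasswd root password,
    so log only its length, never its content.
    """
    return "<user_data: %d bytes, masked>" % len(user_data_b64)

encoded = base64.b64encode(
    b"#cloud-config\nchpasswd:\n  list: |\n    root:12345678\n  expire: False"
).decode()
print(mask_user_data(encoded))  # the password never reaches the log
</imports>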

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1899229/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1896920] Re: Unnecessary error log when checking if a device is ready

2020-10-12 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/754005
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b1f1c8aa50012d09b7a81abfb99d02e0c457fc8c
Submitter: Zuul
Branch: master

commit b1f1c8aa50012d09b7a81abfb99d02e0c457fc8c
Author: Rodolfo Alonso Hernandez 
Date:   Wed Sep 23 16:35:45 2020 +

Reduce log level in "ensure_device_is_ready"

If the device is not ready, the method should inform about this
event. The code calling this method, if needed, can write a higher
log message.

Change-Id: Ib7c5ba736f6e4ccc88df665faeef304c176a24e7
Closes-Bug: #1896920


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1896920

Title:
  Unnecessary error log when checking if a device is ready

Status in neutron:
  Fix Released

Bug description:
  In the method "ensure_device_is_ready" [1], if the device does not
  exist or the MAC is still not assigned, the method returns False and
  also logs an error. This error log is distracting; an info message
  could be logged instead.

  The code using this method, which returns True or False depending on
  the state of the interface, can decide to log at a higher level.

  
[1]https://github.com/openstack/neutron/blob/856cae4cf8e33c05b308d880df78b7be02ae90ad/neutron/agent/linux/ip_lib.py#L955
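
The pattern the report asks for can be sketched like this (simplified
signature, not neutron's actual ip_lib API): the helper reports the
condition at a low level and returns a boolean, and the caller decides
whether the condition deserves an error.

```python
import logging

logging.basicConfig()
LOG = logging.getLogger(__name__)

def ensure_device_is_ready(device_exists, mac_assigned):
    """Report readiness at info level; the caller chooses the severity."""
    if not device_exists or not mac_assigned:
        LOG.info("Device not ready (exists=%s, mac assigned=%s)",
                 device_exists, mac_assigned)
        return False
    return True

# The caller escalates only when not-ready is actually fatal for it:
if not ensure_device_is_ready(device_exists=True, mac_assigned=False):
    LOG.error("Device never became ready; aborting interface setup")
</imports>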

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1896920/+subscriptions



[Yahoo-eng-team] [Bug 1899037] Re: ML2OVN migration script does not set proper bridge mappings

2020-10-12 Thread Nate Johnston
** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1899037

Title:
  ML2OVN migration script does not set proper bridge mappings

Status in neutron:
  Invalid

Bug description:
  ML2OVS -> ML2OVN migration on a non-DVR environment (3 controllers + 2
  computes) fails with the default (empty) bridge mapping settings on
  compute nodes. The file
  /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml
  contains the following:

    ComputeParameters:
      NeutronBridgeMappings: ""

  As a result, after the overcloud update that is performed during the
  migration procedure, all existing VMs are no longer accessible.

  The workaround is to set the following in
  /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml
  before starting the migration:

    ComputeParameters:
      NeutronBridgeMappings: "tenant:br-isolated"

  The issue does not happen when DVR is enabled.

  In the case of SR-IOV, or any other non-DVR case where compute nodes
  are connected to the external network and need to be able to launch VM
  instances on the external network, we need to specify the full value
  of the configured bridge mappings (not only tenant) before starting
  the migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1899037/+subscriptions



[Yahoo-eng-team] [Bug 1898634] Re: BGP peer is not working

2020-10-12 Thread Nate Johnston
** Changed in: neutron
   Status: Fix Released => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1898634

Title:
  BGP peer is not working

Status in neutron:
  In Progress

Bug description:
  I'm trying to configure dynamic routing, but when I associate a
  provider network with the BGP speaker I start to receive these errors:

  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 276, in dispatch
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 196, in _do_dispatch
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/api/rpc/handlers/bgp_speaker_rpc.py", line 65, in get_bgp_speakers
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server     return self.plugin.get_bgp_speakers_for_agent_host(context, host)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/db/bgp_dragentscheduler_db.py", line 263, in get_bgp_speakers_for_agent_host
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server     context, binding['bgp_speaker_id'])
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/db/bgp_db.py", line 165, in get_bgp_speaker_with_advertised_routes
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server     bgp_speaker_id)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/db/bgp_db.py", line 479, in get_routes_by_bgp_speaker_id
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server     bgp_speaker_id)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python3.6/site-packages/neutron_dynamic_routing/db/bgp_db.py", line 673, in _get_central_fip_host_routes_by_bgp_speaker
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server     l3_db.Router.id == router_attrs.router_id)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 2259, in outerjoin
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server     from_joinpoint=from_joinpoint,
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File "<string>", line 2, in _join
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/base.py", line 220, in generate
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server     fn(self, *args[1:], **kw)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 2414, in _join
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server     left, right, onclause, prop, create_aliases, outerjoin, full
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 2437, in _join_left_to_right
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server     ) = self._join_determine_implicit_left_side(left, right, onclause)
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server   File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 2526, in _join_determine_implicit_left_side
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server     "Can't determine which FROM clause to join "
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server sqlalchemy.exc.InvalidRequestError: Can't determine which FROM clause to join from, there are multiple FROMS which can join to this entity. Try adding an explicit ON clause to help resolve the ambiguity.
  2020-10-05 16:56:13.028 2304845 ERROR oslo_messaging.rpc.server


  
  I made a manual installation (Ussuri). I couldn't find any workaround.
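
The final InvalidRequestError suggests its own fix. A minimal sketch of
the idea, using toy tables rather than the neutron-dynamic-routing
schema (all names here are illustrative): when several tables in the
FROM list could join to the target, an explicit ON clause removes the
ambiguity, as the error message advises.

```python
from sqlalchemy import Column, Integer, MetaData, Table, select

metadata = MetaData()
routers = Table("routers", metadata, Column("id", Integer, primary_key=True))
ports = Table("ports", metadata,
              Column("id", Integer, primary_key=True),
              Column("router_id", Integer))
fips = Table("fips", metadata,
             Column("id", Integer, primary_key=True),
             Column("router_id", Integer))

# Both ports and fips could join to routers; spelling out the ON clause
# makes the intended join unambiguous.
stmt = select(ports.c.id, fips.c.id).select_from(
    ports.outerjoin(routers, routers.c.id == ports.c.router_id)
         .outerjoin(fips, fips.c.router_id == routers.c.id)
)
print(stmt)
</imports>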

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1899487] Re: cloud-init hard codes MTU configuration at initial deploy time

2020-10-12 Thread Ryan Harper
" # curl http://169.254.169.254/openstack/2018-08-27/network_data.json
{"links": [{"id": "tapa035fb68-01", "vif_id": "a035fb68-010c-42e3-8da7-ea3c36a0d607", "type": "ovs", "mtu": 8942, "ethernet_mac_address": "fa:16:3e:31:26:f7"}], "networks": [{"id": "network0", "type": "ipv4_dhcp", "link": "tapa035fb68-01", "network_id": "b4ef84c0-1235-48a8-aaf7-03fab7ef5367"}], "services": []}"

How is cloud-init to know from this network_data.json that DHCP will
provide an MTU value? How does it know that it should ignore the
provided MTU? If DHCP is providing the MTU, should network_data.json
then not provide the MTU value?


** Also affects: netplan.io (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Incomplete

** Changed in: netplan.io (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1899487

Title:
  cloud-init hard codes MTU configuration at initial deploy time

Status in cloud-init:
  Incomplete
Status in netplan.io package in Ubuntu:
  Incomplete

Bug description:
  When using OpenStack cloud provider `cloud-init` will write out
  /etc/netplan/50-cloud-init.yaml at initial instance boot and not
  update it on subsequent boots of the instance.

  The OpenStack metadata service provides information about MTU for the
  network [0] and cloud-init takes this value and writes it into the
  netplan configuration [1].

  A side effect of configuring the MTU through netplan is that the
  `systemd-networkd` [Link] section [2] gets the MTUBytes value filled
  and this in turn makes `systemd-networkd` ignore the MTU value
  provided by DHCP [3][4].

  During the lifetime of a cloud events occur that will force a operator
  to reduce the MTU available to instances attached to its overlay
  networks. This may happen because of software imposed change of tunnel
  type (GRE -> VXLAN, VXLAN -> GENEVE) or change of topology or
  encapsulation in the physical network equipment.

  To maximize performance these clouds have configured their instances
  to use the maximum available MTU without leaving any headroom to
  account for such changes and the only way to move forward is to reduce
  the available MTU on the instances. We are facing a concrete challenge
  with this now where we have users wanting to migrate from VXLAN
  tunnels to GENEVE tunnels with 38 byte header size.

  0: # curl http://169.254.169.254/openstack/2018-08-27/network_data.json
  {"links": [{"id": "tapa035fb68-01", "vif_id": "a035fb68-010c-42e3-8da7-ea3c36a0d607", "type": "ovs", "mtu": 8942, "ethernet_mac_address": "fa:16:3e:31:26:f7"}], "networks": [{"id": "network0", "type": "ipv4_dhcp", "link": "tapa035fb68-01", "network_id": "b4ef84c0-1235-48a8-aaf7-03fab7ef5367"}], "services": []}

  1: # cat /etc/netplan/50-cloud-init.yaml
  # This file is generated from information provided by the datasource.  Changes
  # to it will not persist across an instance reboot.  To disable cloud-init's
  # network configuration capabilities, write a file
  # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
  # network: {config: disabled}
  network:
      version: 2
      ethernets:
          ens2:
              dhcp4: true
              match:
                  macaddress: fa:16:3e:31:26:f7
              mtu: 8950
              set-name: ens2

  2: # cat /run/systemd/network/10-netplan-ens2.link 
  [Match]
  MACAddress=fa:16:3e:31:26:f7

  [Link]
  Name=ens2
  WakeOnLan=off
  MTUBytes=8950

  3: # cat /run/systemd/network/10-netplan-ens2.network 
  [Match]
  MACAddress=fa:16:3e:31:26:f7
  Name=ens2

  [Link]
  MTUBytes=8950

  [Network]
  DHCP=ipv4
  LinkLocalAddressing=ipv6

  [DHCP]
  RouteMetric=100
  UseMTU=true

  4: Oct 12 13:30:18 canary-3 systemd-networkd[24084]:
  /run/systemd/network/10-netplan-ens2.network: MTUBytes= in [Link]
  section and UseMTU= in [DHCP] section are set. Disabling UseMTU=.
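
  One way to see the interaction described in [2]-[4]: if the mtu key is
  left out of the generated netplan (a hypothetical, manually edited
  variant of the file above), netplan emits no MTUBytes= in the [Link]
  section, systemd-networkd keeps UseMTU=true, and the DHCP-provided MTU
  is honored:

  # Hypothetical variant of /etc/netplan/50-cloud-init.yaml with the mtu
  # key omitted, so the DHCP-provided MTU applies instead of a pinned value.
  network:
      version: 2
      ethernets:
          ens2:
              dhcp4: true
              match:
                  macaddress: fa:16:3e:31:26:f7
              set-name: ens2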

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1899487/+subscriptions



[Yahoo-eng-team] [Bug 1899502] [NEW] router_centralized_snat ports do not have project_id

2020-10-12 Thread Arnaud Morin
Public bug reported:

When adding a subnet to a distributed router, two interfaces are created:
- one for gateway (network:router_interface_distributed)
- one for snat / instance communication (network:router_centralized_snat)

The gw one has a project_id:

$ openstack port show 4348275a-64bd-439f-be5c-9b3cf1a9160d
...
| project_id  | 0819ce415874459dbd0d312cc15badee
...

While the snat one does not:

$ openstack port show 3258ebd9-2be4-4cf9-a110-c619906708ec
...
project_id  | 
...

Note that both of them are visible to the client:
$ openstack port list -c id -c device_owner
+--------------------------------------+--------------------------------------+----------------------------------+
| ID                                   | device_owner                         | project_id                       |
+--------------------------------------+--------------------------------------+----------------------------------+
| 2f972668-e47c-41c1-90a8-17592f69ff3f | compute:nova                         | 0819ce415874459dbd0d312cc15badee |
| 3258ebd9-2be4-4cf9-a110-c619906708ec | network:router_centralized_snat      |                                  |
| 4348275a-64bd-439f-be5c-9b3cf1a9160d | network:router_interface_distributed | 0819ce415874459dbd0d312cc15badee |
| 5ccb4325-b687-4d8c-82d3-ebe5e9f163d0 | network:dhcp                         | 0819ce415874459dbd0d312cc15badee |
| 93ae1897-7af2-49f3-bfb0-af6050e75ea4 | network:floatingip                   | 0819ce415874459dbd0d312cc15badee |
+--------------------------------------+--------------------------------------+----------------------------------+


Code in charge of creating the GW port:
https://github.com/openstack/neutron/blob/master/neutron/db/l3_db.py#L761-L799

Code in charge of creating the SNAT port:
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvr_db.py#L264-L278
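
A sketch of the shape of a possible fix (illustrative only:
build_snat_port_body is not a neutron function, and the real code in
l3_dvr_db.py differs): the SNAT port creation path would carry the
router's project_id the same way the gateway-port path does.

```python
def build_snat_port_body(router, network_id, subnet_id):
    """Build the port body for a centralized SNAT port.

    Propagates the owning router's project_id, which is the field the
    report shows as empty on network:router_centralized_snat ports.
    """
    return {
        "network_id": network_id,
        "fixed_ips": [{"subnet_id": subnet_id}],
        "device_id": router["id"],
        "device_owner": "network:router_centralized_snat",
        "project_id": router["project_id"],  # previously left unset
        "admin_state_up": True,
        "name": "",
    }

body = build_snat_port_body(
    {"id": "r-1", "project_id": "0819ce415874459dbd0d312cc15badee"},
    network_id="n-1", subnet_id="s-1")
print(body["project_id"])
</imports>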

** Affects: neutron
 Importance: Undecided
 Assignee: Arnaud Morin (arnaud-morin)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1899502

Title:
  router_centralized_snat ports do not have project_id

Status in neutron:
  New

Bug description:
  When adding a subnet to a distributed router, two interfaces are created:
  - one for gateway (network:router_interface_distributed)
  - one for snat / instance communication (network:router_centralized_snat)

  The gw one has a project_id:

  $ openstack port show 4348275a-64bd-439f-be5c-9b3cf1a9160d
  ...
  | project_id  | 0819ce415874459dbd0d312cc15badee
  ...

  While the snat one does not:

  $ openstack port show 3258ebd9-2be4-4cf9-a110-c619906708ec
  ...
  project_id  | 
  ...

  Note that both of them are visible to the client:
  $ openstack port list -c id -c device_owner
  +--------------------------------------+--------------------------------------+----------------------------------+
  | ID                                   | device_owner                         | project_id                       |
  +--------------------------------------+--------------------------------------+----------------------------------+
  | 2f972668-e47c-41c1-90a8-17592f69ff3f | compute:nova                         | 0819ce415874459dbd0d312cc15badee |
  | 3258ebd9-2be4-4cf9-a110-c619906708ec | network:router_centralized_snat      |                                  |
  | 4348275a-64bd-439f-be5c-9b3cf1a9160d | network:router_interface_distributed | 0819ce415874459dbd0d312cc15badee |
  | 5ccb4325-b687-4d8c-82d3-ebe5e9f163d0 | network:dhcp                         | 0819ce415874459dbd0d312cc15badee |
  | 93ae1897-7af2-49f3-bfb0-af6050e75ea4 | network:floatingip                   | 0819ce415874459dbd0d312cc15badee |
  +--------------------------------------+--------------------------------------+----------------------------------+


  Code in charge of creating the GW port:
  https://github.com/openstack/neutron/blob/master/neutron/db/l3_db.py#L761-L799

  Code in charge of creating the SNAT port:
  
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvr_db.py#L264-L278

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1899502/+subscriptions



[Yahoo-eng-team] [Bug 1898997] Re: MAAS cannot deploy/boot if OVS bridge is configured on a single PXE NIC

2020-10-12 Thread Lukas Märdian
The SSH failure seems to be related to cloud-init not detecting the OVS bridge 
+ slaves correctly. Therefore, the cloud-init 'init' stage fails with an 
exception:
"RuntimeError: Not all expected physical devices present: {'52:54:00:d9:08:1c'}"

I'm working on a pull request here:
https://github.com/canonical/cloud-init/pull/608

In combination with the netplan PR, this should solve the issue
described here.

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1898997

Title:
  MAAS cannot deploy/boot if OVS bridge is configured on a single PXE
  NIC

Status in cloud-init:
  In Progress
Status in netplan:
  In Progress

Bug description:
  Problem description:
  If we try to deploy a single-NIC machine via MAAS, configuring an Open
  vSwitch bridge as the primary/PXE interface, the machine will install
  and boot Ubuntu 20.04, but it cannot finish the whole configuration
  (e.g. copying of SSH keys) and cannot be accessed/controlled via MAAS.
  It ends up in a "Failed" state.

  This is because systemd-networkd-wait-online.service fails (for some
  reason) before netplan can fully set up and configure the OVS bridge.
  Because of the broken networking, cloud-init cannot complete its final
  stages, like setting up SSH keys or signaling its state back to MAAS.
  If we wait a little longer the OVS bridge will actually come online
  and networking works, though SSH is still not set up and the MAAS
  state remains "Failed".

  Steps to reproduce:
  * Setup a (virtual) MAAS system, e.g. inside a LXD container using a KVM 
host, as described here:
  
https://discourse.maas.io/t/setting-up-a-flexible-virtual-maas-test-environment/142
  * Install & setup maas[-cli] snap from 2.9/beta channel (instead of the 
deb/PPA from the discourse post)
  * Configure netplan PPA+key for testing via "Settings" -> "Package repos":
  https://launchpad.net/~slyon/+archive/ubuntu/ovs
  * Prepare curtin preseed in /var/snap/maas/current/preseeds/curtin_userdata, 
inside the LXD container (so you can access the broken machine afterwards):
  ==
  #cloud-config
  debconf_selections:
   maas: |
    {{for line in str(curtin_preseed).splitlines()}}
    {{line}}
    {{endfor}}
  late_commands:
    maas: [wget, '--no-proxy', '{{node_disable_pxe_url}}', '--post-data', 
'{{node_disable_pxe_data}}', '-O', '/dev/null']
    90_create_user: ["curtin", "in-target", "--", "sh", "-c", "sudo useradd 
test -g 0 -G sudo"]
    92_set_user_password: ["curtin", "in-target", "--", "sh", "-c", "echo 
'test:test' | sudo chpasswd"]
    94_cat: ["curtin", "in-target", "--", "sh", "-c", "cat /etc/passwd"]
  ==
  * Compose a new virtual machine via MAAS' "KVM" menu, named e.g. "test1"
  * Watch it being commissioned via MAAS' "Machines" menu
  * Once it's ready select your machine (e.g. "test1.maas") -> Network
  * Select the single network interface (e.g. "ens4") -> Create bridge
  * Choose "Bridge type: Open vSwitch (ovs)", Select "Subnet" and "IP mode", 
save.
  * Deploy machine to Ubuntu 20.04 via "Take action" button

  The machine will install the OS and boot, but will end up in a
  "Failed" state inside MAAS due to network/OVS not being setup
  correctly. MAAS/SSH has no control over it. You can access the
  (broken) machine via serial console from the KVM-host (i.e. LXD
  container) via "virsh console test1" using the "test:test"
  credentials.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1898997/+subscriptions



[Yahoo-eng-team] [Bug 1894127] Re: SPEC HAS NO EXPECTATIONS warnings in jasmine tests

2020-10-12 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/753432
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=533892f080ebdf29173d11db4d12f92809d39b36
Submitter: Zuul
Branch: master

commit 533892f080ebdf29173d11db4d12f92809d39b36
Author: Tatiana Ovchinnikova 
Date:   Tue Sep 22 14:11:12 2020 -0500

Make text download and load groups tests work

Currently "then" callback functions for these tests aren't called since
digest cycles were never triggered. Jasmine Spec Runner marks them 'passed'
only adding "SPEC HAS NO EXPECTATIONS" into their names.
This patch triggers a digest by calling a scope's $apply functions in a
correct place, deals with timeout properly and makes the tests work.

Closes-Bug: #1894127

Change-Id: I00acc4b13fa0cc05b8c6ccd2024084527562f001


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1894127

Title:
  SPEC HAS NO EXPECTATIONS warnings in jasmine tests

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  There are 23 tests which Jasmine marks with the warning "SPEC HAS NO
  EXPECTATIONS":

  permissions service
    checkAll
      with extended permissions
        SPEC HAS NO EXPECTATIONS with promise array, adds checks for permissions
        SPEC HAS NO EXPECTATIONS with promise, adds checks for permissions
        SPEC HAS NO EXPECTATIONS with no promise, adds checks for permissions
      SPEC HAS NO EXPECTATIONS without extended permissions it returns no promises
  ...
  textDownloadService
    SPEC HAS NO EXPECTATIONS should return promise and it resolve filename after starting download file
  ...
  horizon.framework.util.filters
    simpleDate
      SPEC HAS NO EXPECTATIONS returns blank if nothing
      SPEC HAS NO EXPECTATIONS returns the expected time
    mediumDate
      SPEC HAS NO EXPECTATIONS returns blank if nothing
      SPEC HAS NO EXPECTATIONS returns the expected time
  ...
  horizon.framework.util.navigations.service
    setBreadcrumb
      SPEC HAS NO EXPECTATIONS sets breadcrumb items from specified array
  ...
  Launch Instance Model
    launchInstanceModel Factory
      Post Initialize Model
        SPEC HAS NO EXPECTATIONS getPorts at launch should not return child port
  ...
  horizon.dashboard.identity.domains.actions.delete.service
    perform method and pass only
      SPEC HAS NO EXPECTATIONS should open the delete modal
      SPEC HAS NO EXPECTATIONS should pass and fail in a function that delete domain by item action
      SPEC HAS NO EXPECTATIONS should pass and fail in a function that delete domain by batch action
  ...
  horizon.dashboard.identity.groups.actions.delete.service
    perform method and pass only
      SPEC HAS NO EXPECTATIONS should open the delete modal
      SPEC HAS NO EXPECTATIONS should pass and fail in a function that delete group by item action
      SPEC HAS NO EXPECTATIONS should pass and fail in a function that delete group by batch action
  ...
  horizon.dashboard.identity.groups
    SPEC HAS NO EXPECTATIONS should load groups
  ...
  horizon.dashboard.identity.users.actions.delete.service
    perform method and pass only
      SPEC HAS NO EXPECTATIONS should open the delete modal
      SPEC HAS NO EXPECTATIONS should pass and fail in a function that delete user by item action
      SPEC HAS NO EXPECTATIONS should pass and fail in a function that delete user by batch action
  ...
  horizon.framework.util.timezones.service
    get timezone offset
      SPEC HAS NO EXPECTATIONS returns +(UTC offset) if nothing
      SPEC HAS NO EXPECTATIONS returns the timezone offset

  
  In general this means that these tests are doing nothing: they exercise
  async callback functions, and it is crucial that their expectations run
  after the promises resolve.
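  The failure mode can be sketched outside of AngularJS (an illustrative
  Python model, not Horizon code): a deferred whose callbacks only run when
  a "digest" is explicitly flushed, mirroring how $q promises resolve only
  inside a $scope.$apply() cycle.

```python
# Minimal model of $q-style deferred resolution: callbacks registered
# with then() do not fire until a digest is explicitly triggered,
# which is exactly why expect() calls inside .then() never ran.
class Deferred:
    def __init__(self):
        self._callbacks = []
        self._value = None
        self._resolved = False

    def then(self, callback):
        self._callbacks.append(callback)

    def resolve(self, value):
        # Like $q, resolution is recorded but callbacks stay pending.
        self._value = value
        self._resolved = True

    def digest(self):
        # Equivalent of $scope.$apply(): flush pending callbacks.
        if self._resolved:
            for cb in self._callbacks:
                cb(self._value)
            self._callbacks = []


results = []
d = Deferred()
d.then(results.append)
d.resolve("file.txt")
# Without a digest the callback never fires -- the Jasmine spec's
# expectations inside .then() are simply never reached:
assert results == []
# Triggering the digest (what the patch does via $apply) runs it:
d.digest()
assert results == ["file.txt"]
```

  Jasmine then counts the spec as passed because it simply saw no
  expectations, which is why the only symptom was the warning in the name.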

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1894127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1899487] [NEW] cloud-init hard codes MTU configuration at initial deploy time

2020-10-12 Thread Frode Nordahl
Public bug reported:

When using the OpenStack cloud provider, `cloud-init` writes out
/etc/netplan/50-cloud-init.yaml at initial instance boot and does not
update it on subsequent boots of the instance.

The OpenStack metadata service provides the MTU for the network [0],
and cloud-init takes this value and writes it into the netplan
configuration [1].

A side effect of configuring the MTU through netplan is that the
`systemd-networkd` [Link] section [2] gets its MTUBytes value filled
in, and this in turn makes `systemd-networkd` ignore the MTU value
provided by DHCP [3][4].

During the lifetime of a cloud, events occur that force an operator to
reduce the MTU available to instances attached to its overlay
networks. This may happen because of a software-imposed change of
tunnel type (GRE -> VXLAN, VXLAN -> GENEVE) or a change of topology or
encapsulation in the physical network equipment.

To maximize performance, these clouds have configured their instances
to use the maximum available MTU without leaving any headroom to
account for such changes, so the only way forward is to reduce the
available MTU on the instances. We are facing a concrete challenge
with this now, with users wanting to migrate from VXLAN tunnels to
GENEVE tunnels and their 38-byte header size.

0: # curl http://169.254.169.254/openstack/2018-08-27/network_data.json
{"links": [{"id": "tapa035fb68-01", "vif_id": 
"a035fb68-010c-42e3-8da7-ea3c36a0d607", "type": "ovs", "mtu": 8942, 
"ethernet_mac_address": "fa:16:3e:31:26:f7"}], "networks": [{"id": "network0", 
"type": "ipv4_dhcp", "link": "tapa035fb68-01", "network_id": 
"b4ef84c0-1235-48a8-aaf7-03fab7ef5367"}], "services": []}

1: # cat /etc/netplan/50-cloud-init.yaml 
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        ens2:
            dhcp4: true
            match:
                macaddress: fa:16:3e:31:26:f7
            mtu: 8950
            set-name: ens2

2: # cat /run/systemd/network/10-netplan-ens2.link 
[Match]
MACAddress=fa:16:3e:31:26:f7

[Link]
Name=ens2
WakeOnLan=off
MTUBytes=8950

3: # cat /run/systemd/network/10-netplan-ens2.network 
[Match]
MACAddress=fa:16:3e:31:26:f7
Name=ens2

[Link]
MTUBytes=8950

[Network]
DHCP=ipv4
LinkLocalAddressing=ipv6

[DHCP]
RouteMetric=100
UseMTU=true

4: Oct 12 13:30:18 canary-3 systemd-networkd[24084]:
/run/systemd/network/10-netplan-ens2.network: MTUBytes= in [Link]
section and UseMTU= in [DHCP] section are set. Disabling UseMTU=.
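The arithmetic behind the squeeze can be sketched as follows (the
physical MTU and per-tunnel overhead figures are assumptions for a
jumbo-frame IPv4 underlay, chosen because they reproduce the 8950 and
8942 values seen above; they are not stated in this report):

```python
# Back-of-the-envelope sketch of why a tunnel-type change forces an
# instance MTU reduction. Overheads are common IPv4-underlay values.
PHYSICAL_MTU = 9000  # assumed jumbo-frame underlay

# Typical per-packet encapsulation overhead in bytes.
OVERHEAD = {
    "gre": 42,     # outer Ethernet + IPv4 + GRE
    "vxlan": 50,   # outer Ethernet + IPv4 + UDP + VXLAN
    "geneve": 58,  # outer Ethernet + IPv4 + UDP + Geneve
}


def instance_mtu(tunnel_type, physical_mtu=PHYSICAL_MTU):
    """Largest MTU an instance can use without fragmenting tunnel traffic."""
    return physical_mtu - OVERHEAD[tunnel_type]


# An instance whose netplan config was frozen at the VXLAN-era value
# (8950) has no headroom left once the cloud migrates to Geneve:
assert instance_mtu("vxlan") == 8950   # the stale value baked into netplan
assert instance_mtu("geneve") == 8942  # the new value DHCP would advertise
```

The hard-coded 8950 in netplan is exactly the value that MTUBytes= pins,
preventing the smaller 8942 advertised via DHCP from taking effect.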

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "cloud-init.tar.gz"
   
https://bugs.launchpad.net/bugs/1899487/+attachment/5421289/+files/cloud-init.tar.gz

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1899487

Title:
  cloud-init hard codes MTU configuration at initial deploy time

Status in cloud-init:
  New

Bug description:
  When using the OpenStack cloud provider, `cloud-init` writes out
  /etc/netplan/50-cloud-init.yaml at initial instance boot and does not
  update it on subsequent boots of the instance.

  The OpenStack metadata service provides the MTU for the network [0],
  and cloud-init takes this value and writes it into the netplan
  configuration [1].

  A side effect of configuring the MTU through netplan is that the
  `systemd-networkd` [Link] section [2] gets its MTUBytes value filled
  in, and this in turn makes `systemd-networkd` ignore the MTU value
  provided by DHCP [3][4].

  During the lifetime of a cloud, events occur that force an operator
  to reduce the MTU available to instances attached to its overlay
  networks. This may happen because of a software-imposed change of
  tunnel type (GRE -> VXLAN, VXLAN -> GENEVE) or a change of topology
  or encapsulation in the physical network equipment.

  To maximize performance, these clouds have configured their instances
  to use the maximum available MTU without leaving any headroom to
  account for such changes, so the only way forward is to reduce the
  available MTU on the instances. We are facing a concrete challenge
  with this now, with users wanting to migrate from VXLAN tunnels to
  GENEVE tunnels and their 38-byte header size.

  0: # curl http://169.254.169.254/openstack/2018-08-27/network_data.json
  {"links": [{"id": "tapa035fb68-01", "vif_id": 
"a035fb68-010c-42e3-8da7-ea3c36a0d607", "type": "ovs", "mtu": 8942, 
"ethernet_mac_address": "fa:16:3e:31:26:f7"}], "networks": [{"id": "network0", 
"type": "ipv4_dhcp", "link": "tapa035fb68-01", "network_id": 
"b4ef84c0-1235-48a8-aaf7-03fab7ef5367"}], "services": []}

  1: # cat 

[Yahoo-eng-team] [Bug 1899470] [NEW] List users is too slow when existing amount of ldap users

2020-10-12 Thread jun923.gu
Public bug reported:

We have encountered a situation where listing users is very slow when there
is a large number of LDAP users. We created a domain and configured LDAP
for it; the LDAP server holds roughly 20,000 users, and querying the users
in the domain takes about 20 seconds:
[root@vm /]# time openstack user list --domain 
f50ccdfe4263414299271bdd61b42e65|wc -l
20004

real0m20.445s
user0m6.018s
sys 0m0.575s
We reviewed the following link: https://bugs.launchpad.net/keystone/+bug/1582585 
and found that our code already includes its patches, so we think it is
necessary to optimize the performance of querying users further.

** Affects: keystone
 Importance: Undecided
 Status: New

** Description changed:

  We encounter the situation that query users is too slow when existing amount 
of LDAP users. We create a domain and config LDAP into the domain, there are 
2 users data in the LDAP server. When we query users in the domain, we 
spend about 20 seconds to return all users in the domain.
- ()[root@vm /]# time openstack user list --domain 
f50ccdfe4263414299271bdd61b42e65|wc -l
+ [root@vm /]# time openstack user list --domain 
f50ccdfe4263414299271bdd61b42e65|wc -l
  20004
  
  real  0m20.445s
  user  0m6.018s
  sys   0m0.575s
  We refer the following link: https://bugs.launchpad.net/keystone/+bug/1582585 
and found our's code have including the patches. So we think it's necessary to 
optimize the performance of querying users again.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1899470

Title:
  List users is too slow when existing amount of ldap users

Status in OpenStack Identity (keystone):
  New

Bug description:
  We have encountered a situation where listing users is very slow when
  there is a large number of LDAP users. We created a domain and
  configured LDAP for it; the LDAP server holds roughly 20,000 users,
  and querying the users in the domain takes about 20 seconds:
  [root@vm /]# time openstack user list --domain 
f50ccdfe4263414299271bdd61b42e65|wc -l
  20004

  real  0m20.445s
  user  0m6.018s
  sys   0m0.575s
  We reviewed the following link: https://bugs.launchpad.net/keystone/+bug/1582585 
  and found that our code already includes its patches, so we think it is
  necessary to optimize the performance of querying users further.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1899470/+subscriptions



[Yahoo-eng-team] [Bug 1896678] Re: [OVN Octavia Provider] test_port_forwarding failing in gate

2020-10-12 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/755111
Committed: 
https://git.openstack.org/cgit/openstack/ovn-octavia-provider/commit/?id=b68d2a78a4b34497e376a7be545a20a23d036191
Submitter: Zuul
Branch: master

commit b68d2a78a4b34497e376a7be545a20a23d036191
Author: Flavio Fernandes 
Date:   Tue Sep 29 15:12:30 2020 -0400

Fix and enable test_port_forwarding

This test began failing because of inconsistencies in the
in-memory neutron db that occur since the change
https://review.opendev.org/#/c/750295/ got merged.

The failure observed was that an entry in the port forwarding
table was created but never got added as a row. When the get
operation for the entry created happened, it raised a not found
exception. To avoid this situation the test will now use
OS_TEST_DBAPI_ADMIN_CONNECTION=sqlite:///sqlite.db

This reverts commit 481f0b3b3c6014c23b9619b6c361754d0185dce1.
Related-bug: #1894117
Closes-bug: #1896678
Co-Authored-By: Jakub Libosvar 
Change-Id: Ic753232ee576fb8b663af5c4a68635bb40a40edc
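The class of failure the commit describes can be reproduced with plain
sqlite3 (an illustrative sketch, not the provider's test code): every
connection to an in-memory SQLite database gets its own independent,
empty database, while a file-backed URL such as sqlite:///sqlite.db is
shared between connections.

```python
import os
import sqlite3
import tempfile

# Two connections to ":memory:" see two independent databases --
# analogous to the port-forwarding entry that "was created but never
# got added as a row": writer and reader were not looking at the same db.
mem_writer = sqlite3.connect(":memory:")
mem_writer.execute("CREATE TABLE pf (id INTEGER)")
mem_writer.execute("INSERT INTO pf VALUES (1)")
mem_writer.commit()

mem_reader = sqlite3.connect(":memory:")
tables = mem_reader.execute(
    "SELECT name FROM sqlite_master WHERE name = 'pf'").fetchall()
assert tables == []  # the reader sees no such table at all

# A file-backed database (what sqlite:///sqlite.db points at) is shared:
path = os.path.join(tempfile.mkdtemp(), "sqlite.db")
file_writer = sqlite3.connect(path)
file_writer.execute("CREATE TABLE pf (id INTEGER)")
file_writer.execute("INSERT INTO pf VALUES (1)")
file_writer.commit()

file_reader = sqlite3.connect(path)
assert file_reader.execute("SELECT id FROM pf").fetchall() == [(1,)]
```

Pointing OS_TEST_DBAPI_ADMIN_CONNECTION at a file-backed URL therefore
makes the row visible to whichever connection performs the get.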


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1896678

Title:
  [OVN Octavia Provider] test_port_forwarding failing in gate

Status in neutron:
  Fix Released

Bug description:
  
ovn_octavia_provider.tests.functional.test_integration.TestOvnOctaviaProviderIntegration.test_port_forwarding
  is currently failing in the check and gate queues and is under
  investigation.

  Will mark unstable to get the gate Green.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1896678/+subscriptions



[Yahoo-eng-team] [Bug 1899431] [NEW] virsh domifstat can not get vhostuser vif statistics

2020-10-12 Thread zhanghao
Public bug reported:

How to reproduce this problem:
1. Create a DPDK VM with a network (the instance has a vhu01 vif).
2. Attach an interface to the VM (the instance has a vhu02 vif).

virsh domifstat instance-XXX vhu01 (success)

virsh domifstat instance-XXX vhu02 (failure)
error: Failed to get interface stats instance-XXX vhu02
error: invalid argument: invalid path, 'vhu02' is not a known interface

When an interface is attached to a DPDK VM, libvirt does not
automatically generate the corresponding interface element in the
domain XML file.
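Why the lookup fails can be sketched as follows (the domain XML below is
illustrative, and the assumption is that the element libvirt failed to
generate is the interface's target device name, which is what `virsh
domifstat` resolves the interface argument against):

```python
import xml.etree.ElementTree as ET

# Hypothetical domain XML: the hot-plugged vhost-user interface is
# missing the element that names its host-side device.
DOMAIN_XML = """
<domain>
  <devices>
    <interface type='vhostuser'>
      <mac address='fa:16:3e:00:00:01'/>
      <target dev='vhu01'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='fa:16:3e:00:00:02'/>
      <!-- hot-plugged interface: no target element was generated -->
    </interface>
  </devices>
</domain>
"""


def domifstat_lookup(domain_xml, ifname):
    """Mimic the by-name lookup that virsh domifstat performs."""
    root = ET.fromstring(domain_xml)
    for iface in root.iter("interface"):
        target = iface.find("target")
        if target is not None and target.get("dev") == ifname:
            return iface
    raise ValueError(f"invalid path, '{ifname}' is not a known interface")


assert domifstat_lookup(DOMAIN_XML, "vhu01") is not None  # found
try:
    domifstat_lookup(DOMAIN_XML, "vhu02")                 # not found
except ValueError as exc:
    assert "vhu02" in str(exc)
```

This mirrors the error text in the report: vhu01 resolves, while vhu02
cannot be matched to any interface in the domain definition.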

** Affects: nova
 Importance: Undecided
 Assignee: zhanghao (zhanghao2)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1899431

Title:
  virsh domifstat can not get vhostuser vif statistics

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  How to reproduce this problem:
  1. Create a DPDK VM with a network (the instance has a vhu01 vif).
  2. Attach an interface to the VM (the instance has a vhu02 vif).

  virsh domifstat instance-XXX vhu01 (success)

  virsh domifstat instance-XXX vhu02 (failure)
  error: Failed to get interface stats instance-XXX vhu02
  error: invalid argument: invalid path, 'vhu02' is not a known interface

  When an interface is attached to a DPDK VM, libvirt does not
  automatically generate the corresponding interface element in the
  domain XML file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1899431/+subscriptions
