[Yahoo-eng-team] [Bug 1826259] Re: Create Volume dialog opens (from image panel in Horizon) but getting error default volume type can not be found

2019-07-10 Thread Akihiro Motoki
Based on feedback from the cinder team, the default volume type is
assumed to be public. Considering this, this is apparently not a horizon
bug. I will remove horizon from the affected projects.

** No longer affects: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1826259

Title:
  Create Volume dialog opens (from image panel in Horizon) but getting
  error default volume type can not be found

Status in StarlingX:
  Triaged

Bug description:
  Brief Description
  -----------------
  The Create Volume dialog opens (from the image panel in Horizon) but an
  error is shown that the default volume type cannot be found.

  Severity
  --------
  standard

  Steps to Reproduce
  ------------------
  1. As tenant user, navigate to the Images panel
  2. Select "Create Volume" to open the dialog
  3. Confirm the volume "Type" setting

  Expected Behavior
  -----------------
  Create Volume dialog should open and a default "Type" should be set (so
  that no errors pop up)

  Note: The Create Volume dialog from the Volumes panel has the default
  Volume Type setting "No volume type"

  Actual Behavior
  ---------------
  The Create Volume dialog (from the image panel) opens without a default
  Volume Type, resulting in the error:
  Error: Unable to retrieve the default volume type.

  Reproducibility
  ---------------
  yes

  System Configuration
  --------------------
  any

  Branch/Pull Time/Commit
  -----------------------
  BUILD_ID="20190421T233001Z"

  Timestamp/Logs
  --------------
  "Default volume type can not be found. (HTTP 404) (Request-ID:
  req-e9858d29-31c8-4d37-a05e-54710d029332)"

To manage notifications about this bug go to:
https://bugs.launchpad.net/starlingx/+bug/1826259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836146] [NEW] Notification of deleting servergroup doesn't have project_id or user_id

2019-07-10 Thread Spencer Yu
Public bug reported:

When deleting a server group and receiving the notification, the
project_id and user_id cannot be found in it.
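
A minimal sketch of the expectation, assuming an oslo.messaging-style
notification payload; all names here are illustrative, not nova's actual
internals:

    from collections import namedtuple

    # Stand-in for nova's request context (illustrative).
    RequestContext = namedtuple('RequestContext', ['project_id', 'user_id'])

    def build_servergroup_delete_payload(context, group_uuid):
        # The payload is expected to carry the caller's project/user,
        # which this bug reports as missing from the notification.
        return {
            'server_group_id': group_uuid,
            'project_id': context.project_id,
            'user_id': context.user_id,
        }

    ctx = RequestContext(project_id='my-project', user_id='my-user')
    print(build_servergroup_delete_payload(ctx, 'my-group-uuid'))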

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1836146

Title:
  Notification of deleting servergroup doesn't have project_id or
  user_id

Status in OpenStack Compute (nova):
  New

Bug description:
  When deleting a server group and receiving the notification, the
  project_id and user_id cannot be found in it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1836146/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836141] [NEW] vm resize failed due to the remains left by failed actions

2019-07-10 Thread Spencer Yu
Public bug reported:

Reproduce Steps:
1. Confirm resize failed due to neutron error:
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 382, in 
decorated_function   |
|   |   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in 
__exit__  |
|   | six.reraise(self.type_, self.value, self.tb)  
   |
|   |   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 370, in 
decorated_function   |
|   |   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3442, in 
confirm_resize  |
|   |   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 254, in 
inner  |
|   | return f(*args, **kwargs) 
   |
|   |   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3440, in 
do_confirm_resize   |
|   |   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3465, in 
_confirm_resize |
|   |   File 
"/usr/lib/python2.7/site-packages/nova/network/base_api.py", line 244, in 
get_instance_nw_info|
|   |   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 949, in 
_get_instance_nw_info  |
|   |   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1724, in 
_build_network_info_model |
|   |   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 972, in 
_gather_port_ids_and_networks  |
|   |   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 290, in 
_get_available_networks|
|   |   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 102, in 
with_params |
|   |   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 577, in 
list_networks   |
|   |   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 307, in 
list|
|   |   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 320, in 
_pagination |
|   |   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 293, in 
get |
|   |   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 270, in 
retry_request   |
|   |   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 211, in 
do_request  |
|   |   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 185, in 
_handle_fault_response  |
|   |   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 83, in 
exception_handler_v20|
|   | '}

2. Resize failed due to Step 1 failed action:
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 315, in 
decorated_function   |
|   |   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in 
__exit__  |
|   | six.reraise(self.type_, self.value, self.tb)  
   |
|   |   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 292, in 
decorated_function   |
|   |   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 382, in 
decorated_function   |
|   |   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in 
__exit__  |
|   | six.reraise(self.type_, self.value, self.tb)  
   |
|   |   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 370, in 
decorated_function   |
|   |   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3986, in 
finish_resize   |
|   |   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, in 
__exit__  |
|   | six.reraise(self.type_, self.value, self.tb)  
   |
|   |   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3974, in 
finish_resize   |
|   |   File 
"

[Yahoo-eng-team] [Bug 1836140] [NEW] 500 response while try to delete image is in uploading state

2019-07-10 Thread Abhishek Kekane
Public bug reported:

When image import fails while the image is being uploaded from the staging
area, the image remains in the 'uploading' state and its data remains in
the staging area. In this scenario, if the file store is not enabled, then
while deleting the image glance-api returns a 500 status code with the
error: Unknown scheme 'file'.

 Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/glance_store/backend.py", line 409, in delete_from_backend
     loc = location.get_location_from_uri(uri, conf=CONF)
   File "/usr/lib/python2.7/site-packages/glance_store/location.py", line 75, in get_location_from_uri
     raise exceptions.UnknownScheme(scheme=pieces.scheme)
 UnknownScheme: Unknown scheme 'file' found in URI


Note:
The solution is similar to the one proposed in this patch:
https://review.opendev.org/#/c/618468/7
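
For illustration, a hedged sketch of the failure path from the traceback
above (the URI is a made-up example; this is not the proposed patch):

    from glance_store import exceptions, location

    try:
        # Staging residue keeps a file:// URI; resolving it fails when the
        # 'file' store is not enabled, surfacing as a 500 on image delete.
        location.get_location_from_uri('file:///tmp/staging/example-image-id')
    except exceptions.UnknownScheme:
        # The handling suggested above would remove the staging data
        # without going through the store backend.
        pass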

** Affects: glance
 Importance: High
 Assignee: Abhishek Kekane (abhishek-kekane)
 Status: New

** Changed in: glance
   Importance: Undecided => High

** Changed in: glance
 Assignee: (unassigned) => Abhishek Kekane (abhishek-kekane)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1836140

Title:
  500 response while try to delete image is in uploading state

Status in Glance:
  New

Bug description:
  When image import fails while the image is being uploaded from the
  staging area, the image remains in the 'uploading' state and its data
  remains in the staging area. In this scenario, if the file store is not
  enabled, then while deleting the image glance-api returns a 500 status
  code with the error: Unknown scheme 'file'.

   Traceback (most recent call last):
     File "/usr/lib/python2.7/site-packages/glance_store/backend.py", line 409, in delete_from_backend
       loc = location.get_location_from_uri(uri, conf=CONF)
     File "/usr/lib/python2.7/site-packages/glance_store/location.py", line 75, in get_location_from_uri
       raise exceptions.UnknownScheme(scheme=pieces.scheme)
   UnknownScheme: Unknown scheme 'file' found in URI


  Note:
  The solution is similar to the one proposed in this patch:
  https://review.opendev.org/#/c/618468/7

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1836140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1832210] Re: fwaas netfilter_log: incorrect decode of log prefix under python 3

2019-07-10 Thread Launchpad Bug Tracker
This bug was fixed in the package neutron-fwaas - 1:14.0.0-0ubuntu1.1

---
neutron-fwaas (1:14.0.0-0ubuntu1.1) disco; urgency=medium

  [ Corey Bryant ]
  * d/gbp.conf: Create stable/stein branch.

  [ James Page ]
  * d/p/netfilter_log-Correct-decode-binary-types.patch: Cherry pick fix
to resolve decoding of netfilter log prefix information under Python
3 (LP: #1832210).

 -- Corey Bryant   Tue, 25 Jun 2019 10:27:58
+0100

** Changed in: neutron-fwaas (Ubuntu Disco)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1832210

Title:
  fwaas netfilter_log: incorrect decode of log prefix under python 3

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive rocky series:
  Fix Committed
Status in Ubuntu Cloud Archive stein series:
  Fix Committed
Status in Ubuntu Cloud Archive train series:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron-fwaas package in Ubuntu:
  Fix Released
Status in neutron-fwaas source package in Cosmic:
  Fix Committed
Status in neutron-fwaas source package in Disco:
  Fix Released
Status in neutron-fwaas source package in Eoan:
  Fix Released

Bug description:
  Under Python 3, the prefix of a firewall log message is not correctly
  decoded and appears as "b'10612530182266949194'":

  2019-06-10 09:14:34 Unknown cookie packet_in pkt=ethernet(dst='fa:16:3e:c6:58:5e',ethertype=2048,src='fa:16:3e:e0:2c:be')ipv4(csum=51290,dst='10.5.0.10',flags=2,header_length=5,identification=37612,offset=0,option=None,proto=6,src='192.168.21.182',tos=16,total_length=52,ttl=63,version=4)tcp(ack=3151291228,bits=17,csum=23092,dst_port=57776,offset=8,option=[TCPOptionNoOperation(kind=1,length=1), TCPOptionNoOperation(kind=1,length=1), TCPOptionTimestamps(kind=8,length=10,ts_ecr=1574746440,ts_val=482688)],seq=2769917228,src_port=22,urgent=0,window_size=3120)
  2019-06-10 09:14:34 {'prefix': "b'10612530182266949194'", 'msg': "ethernet(dst='fa:16:3e:c6:58:5e',ethertype=2048,src='fa:16:3e:e0:2c:be')ipv4(csum=51290,dst='10.5.0.10',flags=2,header_length=5,identification=37612,offset=0,option=None,proto=6,src='192.168.21.182',tos=16,total_length=52,ttl=63,version=4)tcp(ack=3151291228,bits=17,csum=23092,dst_port=57776,offset=8,option=[TCPOptionNoOperation(kind=1,length=1), TCPOptionNoOperation(kind=1,length=1), TCPOptionTimestamps(kind=8,length=10,ts_ecr=1574746440,ts_val=482688)],seq=2769917228,src_port=22,urgent=0,window_size=3120)"}
  2019-06-10 09:14:34 {'0bf81ded-bf94-437d-ad49-063bba9be9bb': [, ]}

  This results in the firewall log driver not being able to map the
  message to the associated port and log resources in neutron, resulting
  in the 'unknown cookie packet_in' warning message.
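
  A minimal Python 3 illustration of the decode problem, using the value
  from the log above:

      raw_prefix = b'10612530182266949194'
      broken = str(raw_prefix)            # "b'10612530182266949194'" -- what gets logged
      fixed = raw_prefix.decode('utf-8')  # '10612530182266949194' -- usable cookie
      assert fixed == '10612530182266949194'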

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1832210/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1834057] Re: change default quota command example is incorrect

2019-07-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/667164
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=426790237e75eb184191bee1de50ee493252a717
Submitter: Zuul
Branch: master

commit 426790237e75eb184191bee1de50ee493252a717
Author: Stephen Finucane 
Date:   Mon Jun 24 17:09:32 2019 +0100

docs: Correct issues with 'openstack quota set' commands

Change Ic857918b15496049b5ccacde9515f130cc0bd7e9 against
openstack-manuals updated the quotas document to use openstackclient
commands in place of novaclient commands. It missed the fact that you
need to pass the '--class' parameter if you wish to set a quota for a
class rather than a project. Correct this.

Change-Id: I5dc65924fee65f6340d1495a9b1b992001c30731
Signed-off-by: Stephen Finucane 
Closes-Bug: #1834057


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1834057

Title:
  change default quota command example is incorrect

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  In Progress
Status in OpenStack Compute (nova) rocky series:
  In Progress
Status in OpenStack Compute (nova) stein series:
  In Progress

Bug description:
  - [x] This doc is inaccurate in this way: it should have --class in the
  "change default quota" command example
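
  For reference, a hedged example of the corrected command form (the
  quota value here is illustrative):

  openstack quota set --instances 20 --class default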

  
  If you have a troubleshooting or support issue, use the following  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release:  on 2018-02-06 10:47:25
  SHA: f696660bf38693e492eec3e7a10bf80661ca6e60
  Source: https://opendev.org/openstack/nova/src/doc/source/admin/quotas.rst
  URL: https://docs.openstack.org/nova/latest/admin/quotas.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1834057/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836105] [NEW] Instance does not start - Error during following call to agent: ovs-vsctl

2019-07-10 Thread Andre Ruiz
Private bug reported:

This is Openstack Queens on Bionic. The main difference from templates
is no neutron-gateway (provider network only) and use of DPDK.

Bundle used in deployment is at:
https://git.launchpad.net/cpe-deployments/tree/?h=2019-05-27-Telefonica-OCS-OP-152907

There are other issues under investigation about dpdk and checksumming but they 
don't seem related to this at first look.
https://bugs.launchpad.net/ubuntu/+source/dpdk/+bug/1833713

- Instances cannot be started once they are shutdown
- It's happening to every instance after the problem first appeared
- It's happening on different hosts
- Any attempt to start will time out with errors in the nova log (below)
- Nothing new appears in openvswitch logs with normal debugging level
- Nothing new appears on libvirt logs for the instance (last status is from 
last boot)

2019-07-10 13:40:42.013 19975 ERROR oslo_messaging.rpc.server InternalError: Failure running os_vif plugin plug method: Failed to plug VIF VIFVHostUser(active=True,address=fa:16:3e:8e:8f:9b,has_traffic_filtering=False,id=ab6225f4-1cd8-43c7-8777-52c99ae80f67,mode='server',network=Network(d8249c3d-03d9-44ac-8eae-fa967993c73d),path='/run/libvirt-vhost-user/vhuab6225f4-1c',plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='vhuab6225f4-1c'). Got error: Error during following call to agent: ['ovs-vsctl', '--timeout=120', '--', '--if-exists', 'del-port', u'vhuab6225f4-1c', '--', 'add-port', u'br-int', u'vhuab6225f4-1c', '--', 'set', 'Interface', u'vhuab6225f4-1c', u'external-ids:iface-id=ab6225f4-1cd8-43c7-8777-52c99ae80f67', 'external-ids:iface-status=active', u'external-ids:attached-mac=fa:16:3e:8e:8f:9b', u'external-ids:vm-uuid=5e46868f-8a52-4d70-b08a-9a320dc9821b', 'type=dpdkvhostuserclient', u'options:vhost-server-path=/run/libvirt-vhost-user/vhuab6225f4-1c']

2019-07-10 13:43:05.511 19975 ERROR os_vif AgentError: Error during following call to agent: ['ovs-vsctl', '--timeout=120', '--', '--if-exists', 'del-port', u'vhuab6225f4-1c', '--', 'add-port', u'br-int', u'vhuab6225f4-1c', '--', 'set', 'Interface', u'vhuab6225f4-1c', u'external-ids:iface-id=ab6225f4-1cd8-43c7-8777-52c99ae80f67', 'external-ids:iface-status=active', u'external-ids:attached-mac=fa:16:3e:8e:8f:9b', u'external-ids:vm-uuid=5e46868f-8a52-4d70-b08a-9a320dc9821b', 'type=dpdkvhostuserclient', u'options:vhost-server-path=/run/libvirt-vhost-user/vhuab6225f4-1c']

Complete logs will follow.

** Affects: nova
 Importance: Undecided
 Status: New

** Information type changed from Public to Private

** Description changed:

- 
- This is Openstack Queens on Bionic. The main difference from templates is no 
neutron-gateway (provider network only) and use of DPDK.
+ This is Openstack Queens on Bionic. The main difference from templates
+ is no neutron-gateway (provider network only) and use of DPDK.
  
  Bundle used in deployment is at:
  
https://git.launchpad.net/cpe-deployments/tree/?h=2019-05-27-Telefonica-OCS-OP-152907
  
  There are other issues under investigation about dpdk and checksumming but 
they don't seem related to this at first look.
  https://bugs.launchpad.net/ubuntu/+source/dpdk/+bug/1833713
  
  - Instances cannot be started once they are shutdown
+ - It's happening to every instance after the problem first appeared
  - Any try to start will timeout with errors in nova log (bellow)
  - Nothing new appears in openvswitch logs with normal debugging level
  - Nothing new appears on libvirt logs for the instance (last status is from 
last boot)
  
  2019-07-10 13:40:42.013 19975 ERROR oslo_messaging.rpc.server
  InternalError: Failure running os_vif plugin plug method: Failed to plug
  VIF
  
VIFVHostUser(active=True,address=fa:16:3e:8e:8f:9b,has_traffic_filtering=False,id=ab6225f4-1cd8-43c7-8777-52c99ae80f67,mode='server',network=Network
  (d8249c3d-03d9-44ac-8eae-fa967993c73d),path='/run/libvirt-vhost-
  
user/vhuab6225f4-1c',plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='vhuab6225f4-1c').
  Got error: Error during following call to agent: ['ovs-vsctl', '--
  timeout=120', '--', '--if-exists', 'del-port', u'vhuab6225f4-1c', '--',
  'add-port', u'br-int', u'vhuab6225f4-1c', '--', 'set', 'Interface',
  u'vhuab6225f4-1c', u'external-ids:iface-
  id=ab6225f4-1cd8-43c7-8777-52c99ae80f67', 'external-ids:iface-
  status=active', u'external-ids:attached-mac=fa:16:3e:8e:8f:9b', u
  'external-ids:vm-uuid=5e46868f-8a52-4d70-b08a-9a320dc9821b',
  'type=dpdkvhostuserclient', u'options:vhost-server-path=/run/libvirt-
  vhost-user/vhuab6225f4-1c']
  
  2019-07-10 13:43:05.511 19975 ERROR os_vif AgentError: Error during
  following call to agent: ['ovs-vsctl', '--timeout=120', '--', '--if-
  exists', 'del-port', u'vhuab6225f4-1c', '--', 'add-port', u'br-int',
  u'vhuab6225f4-1c', '--', 'set', 'Interface', u'vhuab6225f4-1c', u
  'external-ids:iface-id=ab6225f4-1cd8-43c7-8777-52c99ae80f67', 'external-
  ids:iface-status=active', u

[Yahoo-eng-team] [Bug 1816086] Re: Resource Tracker performance with Ironic driver

2019-07-10 Thread Matt Riedemann
** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Changed in: nova/rocky
   Status: New => Confirmed

** Changed in: nova/stein
   Status: New => Confirmed

** Changed in: nova/rocky
   Importance: Undecided => Medium

** Changed in: nova/stein
   Importance: Undecided => Medium

** Changed in: nova/rocky
   Importance: Medium => High

** Changed in: nova/stein
   Importance: Medium => High

** Tags added: ironic performance resource-tracker

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1816086

Title:
  Resource Tracker performance with Ironic driver

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) rocky series:
  Confirmed
Status in OpenStack Compute (nova) stein series:
  Confirmed

Bug description:
  The problem is in rocky.

  The resource tracker builds the resource provider tree, and it is updated
  2 times in "_update_available_resource": in "_init_compute_node" and in
  "_update_available_resource" itself.

  The problem is that the RP tree will contain all the ironic RPs and the
  whole tree is flushed to placement (2 times, as described above) as the
  periodic task iterates over each Ironic RP.

  In our case with 1700 ironic nodes, the periodic task takes:
  1700 x (2 x 7s) = ~6h

  +++

  mitigations:
  - shard nova-compute. Have several nova-computes dedicated to ironic.
  Most of the current deployments only use 1 nova-compute to avoid resource
  shuffling/recreation between nova-computes.
  Several nova-computes will be needed to accommodate the load.

  - why do we need to do the full resource provider tree flush to placement
  and not only the RP that is being considered?
  As a workaround we are doing this now!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1816086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836095] [NEW] Improve "OVSFirewallDriver.process_trusted_ports"

2019-07-10 Thread Rodolfo Alonso
Public bug reported:

When "OVSFirewallDriver.process_trusted_ports" is called with many
ports, "_initialize_egress_no_port_security" retrieves the VIF ports
("Interface" registers in OVS DB), one per iteration, based in the
port_id. Instead of this procedure, if the DB is called only once to
retrieve all the VIF ports, the performance increase is noticeable.
E.g.: bridge with 1000 ports and interfaces.

port_ids = ['id%s' % i for i in range(1, 1000)]
ts1 = timeutils.utcnow_ts(microsecond=True)
vifs = ovs.get_vifs_by_ids(port_ids)
ts2 = timeutils.utcnow_ts(microsecond=True)
print("Time lapsed: %s" % str(ts2 - ts1))

ts1 = timeutils.utcnow_ts(microsecond=True)
for i in range(1, 1000):
    id = "id%s" % i
    vif = ovs.get_vif_port_by_id(id)
ts2 = timeutils.utcnow_ts(microsecond=True)
print("Time lapsed: %s" % str(ts2 - ts1))


Retrieving 100 ports:
- Bulk operation: 0.08 secs
- Loop operation: 5.6 secs

Retrieving 300 ports:
- Bulk operation: 0.08 secs
- Loop operation: 16.44 secs

Retrieving 1000 ports:
- Bulk operation: 0.08 secs
- Loop operation: 59 secs

[1]https://github.com/openstack/neutron/blob/06754907e241af76570f19301093c2abab97e627/neutron/agent/linux/openvswitch_firewall/firewall.py#L667
[2]https://github.com/openstack/neutron/blob/06754907e241af76570f19301093c2abab97e627/neutron/agent/linux/openvswitch_firewall/firewall.py#L747
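
A hedged sketch of the bulk approach, assuming get_vifs_by_ids() returns a
{port_id: vif} mapping as used in the timing snippet above; the
handle_trusted_port callable is an illustrative stand-in for the per-port
processing, not the actual neutron patch:

    def process_trusted_ports_bulk(ovs, port_ids, handle_trusted_port):
        # One OVSDB round trip for all ports, instead of one
        # get_vif_port_by_id() call per port.
        vifs = ovs.get_vifs_by_ids(port_ids)
        for port_id in port_ids:
            vif = vifs.get(port_id)
            if vif is not None:
                handle_trusted_port(port_id, vif)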

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1836095

Title:
  Improve "OVSFirewallDriver.process_trusted_ports"

Status in neutron:
  New

Bug description:
  When "OVSFirewallDriver.process_trusted_ports" is called with many
  ports, "_initialize_egress_no_port_security" retrieves the VIF ports
  ("Interface" records in the OVS DB) one per iteration, based on the
  port_id. If instead the DB is called only once to retrieve all the VIF
  ports, the performance increase is noticeable. E.g.: a bridge with 1000
  ports and interfaces.

  port_ids = ['id%s' % i for i in range(1, 1000)]
  ts1 = timeutils.utcnow_ts(microsecond=True)
  vifs = ovs.get_vifs_by_ids(port_ids)
  ts2 = timeutils.utcnow_ts(microsecond=True)
  print("Time lapsed: %s" % str(ts2 - ts1))

  ts1 = timeutils.utcnow_ts(microsecond=True)
  for i in range(1, 1000):
      id = "id%s" % i
      vif = ovs.get_vif_port_by_id(id)
  ts2 = timeutils.utcnow_ts(microsecond=True)
  print("Time lapsed: %s" % str(ts2 - ts1))

  
  Retrieving 100 ports:
  - Bulk operation: 0.08 secs
  - Loop operation: 5.6 secs

  Retrieving 300 ports:
  - Bulk operation: 0.08 secs
  - Loop operation: 16.44 secs

  Retrieving 1000 ports:
  - Bulk operation: 0.08 secs
  - Loop operation: 59 secs

  
[1]https://github.com/openstack/neutron/blob/06754907e241af76570f19301093c2abab97e627/neutron/agent/linux/openvswitch_firewall/firewall.py#L667
  
[2]https://github.com/openstack/neutron/blob/06754907e241af76570f19301093c2abab97e627/neutron/agent/linux/openvswitch_firewall/firewall.py#L747

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1836095/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501447] Re: QEMU built-in iscsi initiator support should be version-constrained in the driver

2019-07-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/668750
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b8c55d1d3c3b9dfd3b962e092e4fd80f2b0dfac3
Submitter: Zuul
Branch: master

commit b8c55d1d3c3b9dfd3b962e092e4fd80f2b0dfac3
Author: Lee Yarwood 
Date:   Tue Jul 2 20:14:21 2019 +0100

libvirt: Remove unreachable native QEMU iSCSI initiator config code

Ieb9a03d308495be4e8c54b5c6c0ff781ea7f0559 introduced support for using
QEMU's native iSCSI initiator support way back in Kilo. However this was
only enabled when the LibvirtNetVolumeDriver class was configured as the
``iscsi`` volume driver via the ``[libvirt]/volume_drivers`` configurable.

Unfortunately this configurable was removed in Liberty by
I832820499ec3304132379ad9b9d1ee92c5a75b61 essentially rendering this
``iscsi`` based  code path dead ever since unless operators manually
hacked the now static ``libvirt_volume_drivers`` list within driver.py.

As a result of this and a complete lack of any test coverage in the gate
we can now remove this unreachable code from Nova. It might be desirable
to reintroduce this support later but this should take the form of an
extracted volume driver and a new configurable within nova.conf to
switch between the two available drivers.

Closes-Bug: #1501447
Change-Id: I1043287fe8063c4b2af07c997a931a7097518ca9


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501447

Title:
  QEMU built-in iscsi initiator support should be version-constrained in
  the driver

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This spec was approved in kilo:

  http://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented
  /qemu-built-in-iscsi-initiator.html

  With the code change here:

  https://review.openstack.org/#/c/135854/

  The spec and code change says:

  "QEMU binary of Ubuntu 14.04 doesn’t have iSCSI support. Users have to
  install libiscsi2 package and libiscsi-dev from Debian and rebuild
  QEMU binary with libiscsi support by themselves."

  This is a pretty terrible way of determining if this can be supported.
  It also basically says if you're not using ubuntu/debian you're on
  your own for figuring out what version of qemu (and what version your
  distro supports) is required to make this work.

  This should have really had a version constraint in the driver code
  such that if the version of qemu is not new enough we can't support
  the volume backend.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1501447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1816086] Re: Resource Tracker performance with Ironic driver

2019-07-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/637225
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=8c797450cbff5194fb6791cd0a07fa060dc8af72
Submitter: Zuul
Branch: master

commit 8c797450cbff5194fb6791cd0a07fa060dc8af72
Author: Eric Fried 
Date:   Fri Feb 15 10:54:36 2019 -0600

Perf: Use dicts for ProviderTree roots

ProviderTree used to keep track of root providers in a list. Since we
don't yet have sharing providers, this would always be a list of one for
non-ironic deployments, or N for ironic deployments of N nodes.

To find a provider (by name or UUID), we would iterate over this list,
an O(N) operation. For large ironic deployments, this added up fast -
see the referenced bug.

With this change, we store roots in two dicts: one keyed by UUID, one
keyed by name. To find a provider, we first check these dicts. If the
provider we're looking for is a root, this is now O(1). (If it's a
child, it would still be O(N), because we iterate over all the roots
looking for a descendant that matches. But ironic deployments don't have
child providers (yet?) (right?) so that should be n/a. For non-ironic
deployments it's unchanged: O(M) where M is the number of descendants,
which should be very small for the time being.)

Test note: Existing tests in nova.tests.unit.compute.test_provider_tree
thoroughly cover all the affected code paths. There was one usage of
ProviderTree.roots that was untested and broken (even before this
change) which is now fixed.

Change-Id: Ibf430a8bc2a2af9353b8cdf875f8506377a1c9c2
Closes-Bug: #1816086
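
A hedged sketch of the data-structure change described above; attribute
and method names are illustrative, not nova's actual ProviderTree
internals:

    class ProviderTreeSketch(object):
        def __init__(self):
            # Roots stored in two dicts keyed by UUID and by name.
            self._roots_by_uuid = {}
            self._roots_by_name = {}

        def add_root(self, uuid, name, provider):
            self._roots_by_uuid[uuid] = provider
            self._roots_by_name[name] = provider

        def find_root(self, name_or_uuid):
            # O(1) for roots, instead of scanning a list of N ironic
            # node providers.
            if name_or_uuid in self._roots_by_uuid:
                return self._roots_by_uuid[name_or_uuid]
            return self._roots_by_name.get(name_or_uuid)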


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1816086

Title:
  Resource Tracker performance with Ironic driver

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The problem is in rocky.

  The resource tracker builds the resource provider tree, and it is updated
  2 times in "_update_available_resource": in "_init_compute_node" and in
  "_update_available_resource" itself.

  The problem is that the RP tree will contain all the ironic RPs and the
  whole tree is flushed to placement (2 times, as described above) as the
  periodic task iterates over each Ironic RP.

  In our case with 1700 ironic nodes, the periodic task takes:
  1700 x (2 x 7s) = ~6h

  +++

  mitigations:
  - shard nova-compute. Have several nova-computes dedicated to ironic.
  Most of the current deployments only use 1 nova-compute to avoid resource
  shuffling/recreation between nova-computes.
  Several nova-computes will be needed to accommodate the load.

  - why do we need to do the full resource provider tree flush to placement
  and not only the RP that is being considered?
  As a workaround we are doing this now!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1816086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836005] Re: doc: wrong parameter of NotificationPublisher in the notifications document

2019-07-10 Thread Takashi NATSUME
** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1836005

Title:
  doc: wrong parameter of NotificationPublisher in the notifications
  document

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  New
Status in OpenStack Compute (nova) rocky series:
  New
Status in OpenStack Compute (nova) stein series:
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [X] This doc is inaccurate in this way:

  It can be created by instantiating the NotificationPublisher object
  with a host and a binary string parameter

  'binary' should be 'source'.

  
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release:  on 2019-06-05 13:10:03
  SHA: 93cae754cff2317a3ba84267e805a2e317960d4f
  Source: 
https://opendev.org/openstack/nova/src/doc/source/reference/notifications.rst
  URL: https://docs.openstack.org/nova/latest/reference/notifications.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1836005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836065] [NEW] libvirt Error - No more available PCI slots

2019-07-10 Thread Dvir Saidof
Public bug reported:

Initially, the problem was that I couldn't attach more than 25 volumes
to a single VM. I would receive the following error in the "nova-
compute.log" when trying to attach the 26th volume.

ERROR nova.virt.libvirt.driver [req-39ac33eb-e924-4f51-ae7a-e2847505f756
d169224fc7764363895d00fe9d13fb16 2fb563a2af214ebb90efd30f0033de66 -
default default] [instance: 593b3e25-4c02-40dd-acc9-6ee554b0bdd2] Failed
to attach volume at mountpoint: /dev/vd{: libvirtError: XML error:
Unknown disk name 'vd{' and no address specified

This issue has been discussed and solved not long ago, and I overcame it and
managed to attach the 26th volume with ID "vdaa" by backporting from here:
(1) https://review.opendev.org/#/c/631166/
(2) https://review.opendev.org/#/c/616777/

The problem now is that I get a new error when trying to attach the 27th volume.
This is the error I get in "nova-compute.log":
2019-07-03 09:30:55.657 8 ERROR oslo_messaging.rpc.server libvirtError: 
internal error: No more available PCI slots


Steps to reproduce on rocky version:
- update block_device.py, utils.py, compute.py and blockinfo.py on nova-compute 
docker from (1) and (2).
- attach 27 volumes to a VM

Expected results: 
- 27 volumes being attached

Actual result:
- only 26 attached to the VM and the following error on nova-compute.log
2019-07-03 09:30:55.657 8 ERROR oslo_messaging.rpc.server libvirtError: 
internal error: No more available PCI slots

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1836065

Title:
  libvirt Error - No more available PCI slots

Status in OpenStack Compute (nova):
  New

Bug description:
  Initially, the problem was that I couldn't attach more than 25 volumes
  to a single VM. I would receive the following error in the "nova-
  compute.log" when trying to attach the 26th volume.

  ERROR nova.virt.libvirt.driver [req-39ac33eb-e924-4f51-ae7a-
  e2847505f756 d169224fc7764363895d00fe9d13fb16
  2fb563a2af214ebb90efd30f0033de66 - default default] [instance:
  593b3e25-4c02-40dd-acc9-6ee554b0bdd2] Failed to attach volume at
  mountpoint: /dev/vd{: libvirtError: XML error: Unknown disk name 'vd{'
  and no address specified

  This issue has been discussed and solved not long ago, and I overcame it and
  managed to attach the 26th volume with ID "vdaa" by backporting from here:
  (1) https://review.opendev.org/#/c/631166/
  (2) https://review.opendev.org/#/c/616777/

  The problem now is that I get a new error when trying to attach the 27th
  volume. This is the error I get in "nova-compute.log":
  2019-07-03 09:30:55.657 8 ERROR oslo_messaging.rpc.server libvirtError: 
internal error: No more available PCI slots

  
  Steps to reproduce on rocky version:
  - update block_device.py, utils.py, compute.py and blockinfo.py on 
nova-compute docker from (1) and (2).
  - attach 27 volumes to a VM

  Expected results: 
  - 27 volumes being attached

  Actual result:
  - only 26 attached to the VM and the following error on nova-compute.log
  2019-07-03 09:30:55.657 8 ERROR oslo_messaging.rpc.server libvirtError: 
internal error: No more available PCI slots

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1836065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836062] [NEW] change VM QUOTA fails

2019-07-10 Thread Noam Assouline
Public bug reported:

Steps to reproduce:

Identity -> Projects and under 'Manage Members':
1. Select "Modify Quotas"
2. Change nothing 
3. Save

Expected result: screen is closed and updated without problems

Actual result: an error is displayed that requires empty fields in the
"Share" tab to be filled in

it seems like default quota fields for manila share cannot be changed or
handled.

Additional information:
1.
Logs show that it fails in horizon here:

File "/usr/share/openstack-
dashboard/openstack_dashboard/usage/quotas.py", line 243, in
get_disabled_quotas

% set(targets) - QUOTA_FIELDS)

Looking at quotas.py,
line 91:
QUOTA_FIELDS = NOVA_QUOTA_FIELDS | CINDER_QUOTA_FIELDS | NEUTRON_QUOTA_FIELDS

we can see that QUOTA_FIELDS doesn't include MANILA_QUOTA_FIELDS.

2. In Admin -> System -> Defaults, there are 4 tabs:
Compute quotas, Volume quotas, Network quotas and Share quotas.

for some reason, "Share quotas" cannot be updated with new values.
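
A hedged sketch of the kind of fix this analysis suggests; the
MANILA_QUOTA_FIELDS set and the sample field names are assumptions of this
sketch, not a confirmed horizon patch:

    # Illustrative stand-ins for horizon's per-service quota field sets.
    NOVA_QUOTA_FIELDS = frozenset({'instances', 'cores', 'ram'})
    CINDER_QUOTA_FIELDS = frozenset({'volumes', 'gigabytes'})
    NEUTRON_QUOTA_FIELDS = frozenset({'network', 'subnet', 'port'})
    MANILA_QUOTA_FIELDS = frozenset({'shares', 'share_gigabytes'})  # assumed

    # Include manila's fields in the aggregate so Share-tab targets are
    # not reported as unknown by get_disabled_quotas().
    QUOTA_FIELDS = (NOVA_QUOTA_FIELDS | CINDER_QUOTA_FIELDS |
                    NEUTRON_QUOTA_FIELDS | MANILA_QUOTA_FIELDS)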

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1836062

Title:
  change VM QUOTA fails

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:

  Identity -> Projects and under 'Manage Members':
  1. Select "Modify Quotas"
  2. Change nothing 
  3. Save

  Expected result: screen is closed and updated without problems

  Actual result: an error is displayed that requires empty fields in the
  "Share" tab to be filled in

  it seems like default quota fields for manila share cannot be changed
  or handled.

  Additional information:
  1.
  Logs show that it fails in horizon here:

  File "/usr/share/openstack-
  dashboard/openstack_dashboard/usage/quotas.py", line 243, in
  get_disabled_quotas

  % set(targets) - QUOTA_FIELDS)

  Looking at quotas.py,
  line 91:
  QUOTA_FIELDS = NOVA_QUOTA_FIELDS | CINDER_QUOTA_FIELDS | NEUTRON_QUOTA_FIELDS

  we can see that QUOTA_FIELDS doesn't include MANILA_QUOTA_FIELDS.

  2. In Admin -> System -> Defaults, there are 4 tabs:
  Compute quotas, Volume quotas, Network quotas and Share quotas.

  for some reason, "Share quotas" cannot be updated with new values.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1836062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836037] [NEW] Routed provider networks nova inventory update fails

2019-07-10 Thread Lajos Katona
Public bug reported:

The patch https://review.opendev.org/663980 introduced a serious misreading of
the placement API.
The lines
https://review.opendev.org/#/c/663980/2/neutron/services/segments/plugin.py@220
assume that "Show resource provider inventory" (see:
https://developer.openstack.org/api-ref/placement/?expanded=show-resource-provider-inventory-detail#show-resource-provider-inventory)
returns a dict like
{'IPV4_ADDRESS': {'allocation_ratio': 42}}
but if we read the documentation, the response is actually a dict like:
{'allocation_ratio': 42}

The other fix in that patch is good as it is
(https://review.opendev.org/#/c/663980/2/neutron/services/segments/plugin.py@255)
for "Update resource provider inventories" (see:
https://developer.openstack.org/api-ref/placement/?expanded=update-
resource-provider-inventories-detail#update-resource-provider-
inventories)
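
A hedged sketch of the two readings, using a plain dict for the response
body (field values illustrative):

    # "Show resource provider inventory" returns the inventory fields
    # directly, per the placement API reference.
    response = {
        'allocation_ratio': 42.0,
        'max_unit': 254,
        'min_unit': 1,
        'reserved': 2,
        'step_size': 1,
        'total': 254,
        'resource_provider_generation': 1,
    }

    allocation_ratio = response['allocation_ratio']  # correct reading
    # The mistaken reading this bug describes would be:
    # allocation_ratio = response['IPV4_ADDRESS']['allocation_ratio']
    # which raises KeyError against the real response shape.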

** Affects: neutron
 Importance: Undecided
 Assignee: Lajos Katona (lajos-katona)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Lajos Katona (lajos-katona)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1836037

Title:
  Routed provider networks nova inventory update fails

Status in neutron:
  New

Bug description:
  The patch https://review.opendev.org/663980 introduced a serious
  misreading of the placement API.
  The lines
  https://review.opendev.org/#/c/663980/2/neutron/services/segments/plugin.py@220
  assume that "Show resource provider inventory" (see:
  https://developer.openstack.org/api-ref/placement/?expanded=show-resource-provider-inventory-detail#show-resource-provider-inventory)
  returns a dict like
  {'IPV4_ADDRESS': {'allocation_ratio': 42}}
  but if we read the documentation, the response is actually a dict like:
  {'allocation_ratio': 42}

  The other fix in that patch is good as it is
  
(https://review.opendev.org/#/c/663980/2/neutron/services/segments/plugin.py@255)
  for "Update resource provider inventories" (see:
  https://developer.openstack.org/api-ref/placement/?expanded=update-
  resource-provider-inventories-detail#update-resource-provider-
  inventories)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1836037/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836005] Re: doc: wrong parameter of NotificationPublisher in the notifications document

2019-07-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/669993
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=de31466fdb09dd367b23660f71b4ca06f83271a2
Submitter: Zuul
Branch: master

commit de31466fdb09dd367b23660f71b4ca06f83271a2
Author: Takashi NATSUME 
Date:   Wed Jul 10 16:06:22 2019 +0900

doc: Fix a parameter of NotificationPublisher

The 'binary' parameter has been changed to the 'source'
since I95b5b0826190d396efe7bfc017f6081a6356da65.
But the notification document has not been updated yet.

Replace the 'binary' parameter with the 'source' parameter.

Change-Id: I141c90ac27d16f2e9c033bcd2f95ac08904a2f52
Closes-Bug: #1836005
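
A hedged sketch of the corrected instantiation per the commit above; the
import path is an assumption of this sketch:

    from nova.notifications.objects import base as notification_base

    # 'source' replaced the former 'binary' parameter.
    publisher = notification_base.NotificationPublisher(
        host='compute-1', source='nova-compute')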


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1836005

Title:
  doc: wrong parameter of NotificationPublisher in the notifications
  document

Status in OpenStack Compute (nova):
  Fix Released

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [X] This doc is inaccurate in this way:

  It can be created by instantiating the NotificationPublisher object
  with a host and a binary string parameter

  'binary' should be 'source'.

  
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release:  on 2019-06-05 13:10:03
  SHA: 93cae754cff2317a3ba84267e805a2e317960d4f
  Source: 
https://opendev.org/openstack/nova/src/doc/source/reference/notifications.rst
  URL: https://docs.openstack.org/nova/latest/reference/notifications.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1836005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836029] [NEW] Got some db errors when call list servers

2019-07-10 Thread zhangyujun
Public bug reported:

Got some DB errors when calling list servers, and nova-api returns HTTP
code 500.

2019-07-02 10:08:39 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:39.736 16 INFO nova.osapi_compute.wsgi.server [req-5804b3dd-9968-481d-b03f-ed36192abb2f 1c827fbc129a4025bcb2a2d8cacc6b3d d8ac61ac9f1d4e5e9aa9c4313c668834 - default default] 10.233.66.23 "GET /v2.1/d8ac61ac9f1d4e5e9aa9c4313c668834/servers/detail?project_id=d8ac61ac9f1d4e5e9aa9c4313c668834&redirect=detail_x HTTP/1.1" status: 200 len: 42583 time: 0.2910399
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters [req-edd4d5e5-05ee-4a94-974b-c127fc4ce86f 1c827fbc129a4025bcb2a2d8cacc6b3d d8ac61ac9f1d4e5e9aa9c4313c668834 - default default] DB exception wrapped.
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters     context)
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters     cursor.execute(statement, parameters)
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 167, in execute
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters     result = self._query(query)
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python2.7/site-packages/pymysql/cursors.py", line 323, in _query
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters     conn.query(q)
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 836, in query
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1020, in _read_query_result
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters     result.read()
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 1303, in read
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters     first_packet = self.connection._read_packet()
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 962, in _read_packet
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters     packet_header = self._read_bytes(4)
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters   File "/usr/lib/python2.7/site-packages/pymysql/connections.py", line 989, in _read_bytes
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters     data = self._rfile.read(num_bytes)
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters RuntimeError: reentrant call inside <_io.BufferedReader name=23>
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.784 16 ERROR oslo_db.sqlalchemy.exc_filters
2019-07-02 10:08:42 +0800 | nova-api-osapi-5475cd965-9wbl2 | 2019-07-02 10:08:42.785 16 ERROR nova.api.openstack.extensions [req-edd4d5e5-05ee-4a94-974b-c127fc4ce86f 1c827fbc129a4025bcb2a2d8cacc6b3d d8ac61ac9f1d4e5e9aa9c4313c668834 - default default] Unexpected exceptio

[Yahoo-eng-team] [Bug 1836028] [NEW] Functional Test script results in an authentication error

2019-07-10 Thread Sara Nierodzik
Public bug reported:

The script is "configure_for_func_testing.sh", located in the
/opt/stack/neutron/tools folder.
When the guidelines from the following link are followed:
https://docs.openstack.org/neutron/ocata/devref/development.environment.html
the test script displays the following error:

ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using
password: YES)

Possibly due to the MySQL version being used:
"Since version 5.7, MySQL is secure-by-default: a random root password is 
generated upon installation; you need to read this password from the server 
log. you have to change this password the first time you connect. you cannot 
use a blank password because of the validate_password plugin."

The fix for this is to include the already set password in order to
access the MySQL database.
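
For illustration, a hedged sketch of the kind of change: connect with the
already-set password rather than a blank one (the variable name is
illustrative, not necessarily the script's actual one):

mysql -u root -p"$MYSQL_ROOT_PASSWORD" -e "SELECT 1;"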

** Affects: neutron
 Importance: Undecided
 Assignee: Sara Nierodzik (snierodz)
 Status: Fix Committed

** Changed in: neutron
 Assignee: (unassigned) => Sara Nierodzik (snierodz)

** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Status: Incomplete => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1836028

Title:
  Functional Test script results in an authentication error

Status in neutron:
  Fix Committed

Bug description:
  The script is "configure_for_func_testing.sh", located in the
  /opt/stack/neutron/tools folder.
  When the guidelines from the following link are followed:
  https://docs.openstack.org/neutron/ocata/devref/development.environment.html
  the test script displays the following error:

  ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using
  password: YES)

  Possibly due to the MySQL version being used:
  "Since version 5.7, MySQL is secure-by-default: a random root password is 
generated upon installation; you need to read this password from the server 
log. you have to change this password the first time you connect. you cannot 
use a blank password because of the validate_password plugin."

  The fix for this is to include the already set password in order to
  access the MySQL database.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1836028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836023] [NEW] OVS agent "hangs" while processing trusted ports

2019-07-10 Thread Oleg Bondarev
Public bug reported:

Queens, ovsdb native interface.

On a loaded gtw node hosting > 1000 ports, when restarting neutron-
openvswitch-agent, at some moment the agent stops sending state reports
and doing any logging for a significant time, depending on the number of
ports. In our case the gtw node hosts > 1400 ports and the agent hangs
for ~100 seconds. Thus if the configured agent_down_time is less than
100 seconds, the neutron server sees the agent as down and starts
resource rescheduling. After the agent stops hanging it sees itself as
"revived" and starts a new full sync. This loop is almost endless.

Debug showed the culprit is process_trusted_ports:
https://github.com/openstack/neutron/blob/13.0.4/neutron/agent/linux/openvswitch_firewall/firewall.py#L655
- this func does not yield control to other greenthreads and blocks
until all trusted ports are processed. Since on gateway nodes almost all
ports are "trusted" (router and dhcp ports), process_trusted_ports may
take significant time.

The proposal would be to add greenlet.sleep(0) inside the loop in
process_trusted_ports; that fixed the issue in our environment.
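
A hedged sketch of the proposed yield, assuming the agent runs under
eventlet (eventlet.sleep(0) is the usual cooperative yield; the per-port
helper name is illustrative, not the actual patch):

    import eventlet

    def process_trusted_ports(port_ids, initialize_port):
        for port_id in port_ids:
            initialize_port(port_id)  # per-port flow setup (illustrative)
            # Yield to other greenthreads (e.g. the state-report loop) so
            # processing ~1400 trusted ports cannot starve them.
            eventlet.sleep(0)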

** Affects: neutron
 Importance: High
 Assignee: Oleg Bondarev (obondarev)
 Status: In Progress


** Tags: ovs-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1836023

Title:
  OVS agent "hangs" while processing trusted ports

Status in neutron:
  In Progress

Bug description:
  Queens, ovsdb native interface.

  On a loaded gtw node hosting > 1000 ports, when restarting neutron-
  openvswitch-agent, at some moment the agent stops sending state reports
  and doing any logging for a significant time, depending on the number
  of ports. In our case the gtw node hosts > 1400 ports and the agent
  hangs for ~100 seconds. Thus if the configured agent_down_time is less
  than 100 seconds, the neutron server sees the agent as down and starts
  resource rescheduling. After the agent stops hanging it sees itself as
  "revived" and starts a new full sync. This loop is almost endless.
  Debug showed the culprit is process_trusted_ports:
  
https://github.com/openstack/neutron/blob/13.0.4/neutron/agent/linux/openvswitch_firewall/firewall.py#L655
  - this func does not yield control to other greenthreads and blocks
  until all trusted ports are processed. Since on gateway nodes almost
  all ports are "trusted" (router and dhcp ports), process_trusted_ports
  may take significant time.

  The proposal would be to add greenlet.sleep(0) inside the loop in
  process_trusted_ports; that fixed the issue in our environment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1836023/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836015] [NEW] [neutron-fwaas] firewall group status is inactive when updating policy in fwg

2019-07-10 Thread zhanghao
Public bug reported:

[root@controller neutron]# openstack firewall group show fwg1
+---+---+
| Field | Value |
+---+---+
| Description   |   |
| Egress Policy ID  | 57a7506f-f841-4679-bf90-e1e33ccc9dc7  |
| ID| f4558994-d207-4183-a077-ea7837574ccf  |
| Ingress Policy ID | 57a7506f-f841-4679-bf90-e1e33ccc9dc7  |
| Name  | fwg1  |
| Ports | [u'139e9560-9b72-4135-a3d4-94bf7cafbd6a'] |
| Project   | 8c91479bacc64574b828d4809e2d23c2  |
| Shared| False |
| State | UP|
| Status| ACTIVE|
| project_id| 8c91479bacc64574b828d4809e2d23c2  |
+---+---+

openstack firewall group set fwg1 --no-ingress-firewall-policy

[root@controller neutron]# openstack firewall group show fwg1
+---+---+
| Field | Value |
+---+---+
| Description   |   |
| Egress Policy ID  | 57a7506f-f841-4679-bf90-e1e33ccc9dc7  |
| ID| f4558994-d207-4183-a077-ea7837574ccf  |
| Ingress Policy ID | None  |
| Name  | fwg1  |
| Ports | [u'139e9560-9b72-4135-a3d4-94bf7cafbd6a'] |
| Project   | 8c91479bacc64574b828d4809e2d23c2  |
| Shared| False |
| State | UP|
| Status| INACTIVE  |
| project_id| 8c91479bacc64574b828d4809e2d23c2  |
+---+---+

The iptables rules in the router namespace have not changed.
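
A quick way to confirm that symptom is to diff the rules inside the
router's namespace before and after the policy update; a sketch (the
qrouter-<uuid> naming follows neutron's convention, and the router UUID
is a placeholder):

    import subprocess

    ROUTER_ID = '<router-uuid>'  # placeholder; use the real router ID

    def dump_router_iptables(router_id):
        """Return iptables-save output from the qrouter namespace."""
        ns = 'qrouter-%s' % router_id
        return subprocess.check_output(
            ['ip', 'netns', 'exec', ns, 'iptables-save']).decode()

    before = dump_router_iptables(ROUTER_ID)
    # openstack firewall group set fwg1 --no-ingress-firewall-policy
    after = dump_router_iptables(ROUTER_ID)
    print('rules changed' if before != after else 'rules unchanged')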

** Affects: neutron
 Importance: Undecided
 Assignee: zhanghao (zhanghao2)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => zhanghao (zhanghao2)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1836015

Title:
  [neutron-fwaas] firewall group status is inactive when updating policy
  in fwg

Status in neutron:
  In Progress

Bug description:
  [root@controller neutron]# openstack firewall group show fwg1
  +---+---+
  | Field | Value |
  +---+---+
  | Description   |   |
  | Egress Policy ID  | 57a7506f-f841-4679-bf90-e1e33ccc9dc7  |
  | ID| f4558994-d207-4183-a077-ea7837574ccf  |
  | Ingress Policy ID | 57a7506f-f841-4679-bf90-e1e33ccc9dc7  |
  | Name  | fwg1  |
  | Ports | [u'139e9560-9b72-4135-a3d4-94bf7cafbd6a'] |
  | Project   | 8c91479bacc64574b828d4809e2d23c2  |
  | Shared| False |
  | State | UP|
  | Status| ACTIVE|
  | project_id| 8c91479bacc64574b828d4809e2d23c2  |
  +---+---+

  openstack firewall group set fwg1 --no-ingress-firewall-policy

  [root@controller neutron]# openstack firewall group show fwg1
  +---+---+
  | Field | Value |
  +---+---+
  | Description   |   |
  | Egress Policy ID  | 57a7506f-f841-4679-bf90-e1e33ccc9dc7  |
  | ID| f4558994-d207-4183-a077-ea7837574ccf  |
  | Ingress Policy ID | None  |
  | Name  | fwg1  |
  | Ports | [u'139e9560-9b72-4135-a3d4-94bf7cafbd6a'] |
  | Project   | 8c91479bacc64574b828d4809e2d23c2  |
  | Shared| False |
  | State | UP|
  | Status| INACTIVE  |
  | project_id| 8c91479bacc64574b828d4809e2d23c2  |
  +---+---+

  The iptables rules in the router namespace have not changed.

[Yahoo-eng-team] [Bug 1754104] Re: install guide: keystone_authtoken/auth_url shows incorrect port

2019-07-10 Thread zengjia
** Also affects: swift
   Importance: Undecided
   Status: New

** Changed in: swift
 Assignee: (unassigned) => zengjia (zengjia87)

** Changed in: swift
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1754104

Title:
  install guide: keystone_authtoken/auth_url shows incorrect port

Status in Cinder:
  In Progress
Status in Glance:
  Fix Released
Status in Glance queens series:
  Fix Released
Status in Manila:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress

Bug description:

  This bug tracker is for errors with the documentation; use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [X] This doc is inaccurate in this way: The configuration shown for
  [keystone_authtoken] needs an update regarding the ports for Queens.
  Following the guides, keystone listens only on port 5000, not on both
  5000 and 35357.
  - [ ] This is a doc addition request.
  - [x] I have a fix to the document that I can paste below including example: 
input and output. 
  Input:
  [keystone_authtoken]
  # ...
  auth_uri = http://controller:5000
  auth_url = http://controller:35357
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = glance
  password = GLANCE_PASS

  Output:
  [keystone_authtoken]
  # ...
  auth_uri = http://controller:5000
  auth_url = http://controller:5000
  memcached_servers = controller:11211
  auth_type = password
  project_domain_name = default
  user_domain_name = default
  project_name = service
  username = glance
  password = GLANCE_PASS
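
  A small sketch to sanity-check the corrected value (not part of the
  guide; assumes keystoneauth1 is installed and the credentials above
  are valid):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    # Tokens should now be issued via port 5000, which is why auth_url
    # must no longer point at 35357.
    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='glance', password='GLANCE_PASS',
                       project_name='service',
                       user_domain_name='default',
                       project_domain_name='default')
    sess = session.Session(auth=auth)
    print('token issued:', bool(sess.get_token()))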

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 16.0.1.dev1 on 'Thu Mar 1 07:26:57 2018, commit 968f4ae'
  SHA: 968f4ae9ce244d9372cb3e8f45acea9d557f317d
  Source: 
https://git.openstack.org/cgit/openstack/glance/tree/doc/source/install/install-ubuntu.rst
  URL: https://docs.openstack.org/glance/queens/install/install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1754104/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836005] [NEW] doc: wrong parameter of NotificationPublisher in the notifications document

2019-07-10 Thread Takashi NATSUME
Public bug reported:


This bug tracker is for errors with the documentation; use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [X] This doc is inaccurate in this way:

"It can be created by instantiating the NotificationPublisher object
with a host and a binary string parameter"

'binary' should be 'source'; see the sketch after the checklist below.


- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 
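
A short sketch of the corrected instantiation (illustrative; the import
path and field values are assumptions based on nova's versioned
notification objects):

    from nova.notifications.objects import base as notification_base

    # The publisher takes 'host' and 'source' (not 'binary'); the
    # values here are illustrative.
    publisher = notification_base.NotificationPublisher(
        host='compute-1', source='nova-compute')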

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release:  on 2019-06-05 13:10:03
SHA: 93cae754cff2317a3ba84267e805a2e317960d4f
Source: 
https://opendev.org/openstack/nova/src/doc/source/reference/notifications.rst
URL: https://docs.openstack.org/nova/latest/reference/notifications.html

** Affects: nova
 Importance: Undecided
 Assignee: Takashi NATSUME (natsume-takashi)
 Status: In Progress


** Tags: doc notifications

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1836005

Title:
  doc: wrong parameter of NotificationPublisher in the notifications
  document

Status in OpenStack Compute (nova):
  In Progress

Bug description:

  This bug tracker is for errors with the documentation; use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [X] This doc is inaccurate in this way:

  "It can be created by instantiating the NotificationPublisher object
  with a host and a binary string parameter"

  'binary' should be 'source'.

  
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release:  on 2019-06-05 13:10:03
  SHA: 93cae754cff2317a3ba84267e805a2e317960d4f
  Source: 
https://opendev.org/openstack/nova/src/doc/source/reference/notifications.rst
  URL: https://docs.openstack.org/nova/latest/reference/notifications.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1836005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp