[Yahoo-eng-team] [Bug 1930499] [NEW] Can't add a subnet without gateway ip to a router

2021-06-01 Thread Takashi Kajinami
Public bug reported:

Currently, when adding an interface to a router, Horizon only shows
subnets that have a gateway IP.

On the other hand, Horizon supports attaching a subnet to a router using a
fixed interface IP. In this case a gateway IP is not required: a new port
is created with the given IP in the subnet and that port is attached to
the router. This is especially useful when users want to connect an
instance to multiple subnets with a single default gateway (only one
subnet has a gateway IP while the others don't).

However, because of the current logic that filters out subnets without a
gateway IP, we currently cannot add a subnet without a gateway IP to a
router using a fixed IP.
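
A minimal sketch of the relaxed filter described above; the `Subnet` class and `usable_subnets()` helper are illustrative stand-ins, not Horizon's actual code:

```python
# Illustrative only: when the router interface is added with a fixed IP,
# subnets without a gateway IP should remain selectable.

class Subnet:
    def __init__(self, name, gateway_ip):
        self.name = name
        self.gateway_ip = gateway_ip

def usable_subnets(subnets, use_fixed_ip=False):
    """Return the subnets selectable for a router interface.

    The current logic keeps only subnets with a gateway IP; with a fixed
    IP no gateway is required, so all subnets should be offered.
    """
    if use_fixed_ip:
        return list(subnets)
    return [s for s in subnets if s.gateway_ip]

subnets = [Subnet("front", "10.0.0.1"), Subnet("back", None)]
print([s.name for s in usable_subnets(subnets)])                     # ['front']
print([s.name for s in usable_subnets(subnets, use_fixed_ip=True)])  # ['front', 'back']
```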

** Affects: horizon
 Importance: Undecided
 Status: In Progress


-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1930499

Title:
  Can't add a subnet without gateway ip to a router

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Currently, when adding an interface to a router, Horizon only shows
  subnets that have a gateway IP.

  On the other hand, Horizon supports attaching a subnet to a router using
  a fixed interface IP. In this case a gateway IP is not required: a new
  port is created with the given IP in the subnet and that port is
  attached to the router. This is especially useful when users want to
  connect an instance to multiple subnets with a single default gateway
  (only one subnet has a gateway IP while the others don't).

  However, because of the current logic that filters out subnets without a
  gateway IP, we currently cannot add a subnet without a gateway IP to a
  router using a fixed IP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1930499/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1919177] Re: Azure: issues with accelerated networking on Hirsute

2021-06-01 Thread Balint Reczey
@ddstreed Could you please look into this SRU candidate?

@gjolly What is the importance/urgency of landing this fix?

** Also affects: cloud-init (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Also affects: systemd (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Also affects: linux-azure (Ubuntu Hirsute)
   Importance: Undecided
   Status: New

** Changed in: systemd (Ubuntu)
   Status: Incomplete => Fix Released

** Changed in: linux-azure (Ubuntu Hirsute)
   Status: New => Invalid

** Changed in: cloud-init (Ubuntu Hirsute)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1919177

Title:
  Azure: issues with accelerated networking on Hirsute

Status in cloud-init:
  Incomplete
Status in cloud-init package in Ubuntu:
  Incomplete
Status in linux-azure package in Ubuntu:
  Invalid
Status in systemd package in Ubuntu:
  Fix Released
Status in cloud-init source package in Hirsute:
  Incomplete
Status in linux-azure source package in Hirsute:
  Invalid
Status in systemd source package in Hirsute:
  New

Bug description:
  [General]

  On Azure, when provisioning a Hirsute VM with Accelerated Networking
  enabled, sometimes part of the cloud-init configuration is not
  applied.

  In particular, in those cases, the public SSH key is not set up
  properly.

  [how to reproduce]

  Start a VM with AN enabled:

  ```
  az vm create --name "$VM_NAME" --resource-group "$GROUP" \
    --location "UK South" \
    --image 'Canonical:0001-com-ubuntu-server-hirsute-daily:21_04-daily-gen2:latest' \
    --size Standard_F8s_v2 --admin-username ubuntu \
    --ssh-key-value "$SSH_KEY" --accelerated-networking
  ```

  After a moment, try to SSH: if you succeed, delete and recreate a new
  VM.

  [troubleshooting]

  To be able to connect to the VM, run:

  ```
  az vm run-command invoke -g "$GROUP" -n "$VM_NAME" --command-id \
    RunShellScript --scripts "sudo -u ubuntu ssh-import-id $LP_USERNAME"
  ```

  In "/run/cloud-init/instance-data.json", I can see:
  ```
   "publicKeys": [
    {
     "keyData": "",
     "path": "/home/ubuntu/.ssh/authorized_keys"
    }
   ],
  ```

  as expected.

  [workaround]

  As mentioned, Azure allows the user to run commands in the VM without an
  SSH connection. To do so, one can use the Azure CLI:

  az vm run-command invoke -g "$GROUP" -n "$VM_NAME" --command-id
  RunShellScript --scripts "sudo -u ubuntu ssh-import-id $LP_USERNAME"

  This example uses "ssh-import-id", but it's also possible to just echo a
  given public key into /home/ubuntu/.ssh/authorized_keys.

  NOTE: this only solves the SSH issue; I do not know whether this bug
  affects other things. If so, the user would have to apply those things
  manually.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1919177/+subscriptions



[Yahoo-eng-team] [Bug 1791243] Re: launch-instance-from-volume.rst is not latest version

2021-06-01 Thread Stephen Finucane
This doc needs to be reworked, but I think we should do so from scratch
rather than copying the (now very old) stuff from the manuals.

** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1791243

Title:
  launch-instance-from-volume.rst is not latest version

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  We lost some changes of doc/source/user/launch-instance-from-
  volume.rst in openstack-manuals after ocata

  We need upload the latest doc from manuals repo [1], and
  merge all later changes[2][3][4] into this doc.

  [1] I4a556b6a596a28c0350c7411c147459c3f06d084
  [2] Ifa2e2bbb4c5f51f13d1a5832bd7dbf9f690fcad7
  [3] Ida4cf70a7e53fd37ceeadb5629e3221072219689
  [4] Ifb99e727110c4904a85bc4a13366c2cae300b8df

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1791243/+subscriptions



[Yahoo-eng-team] [Bug 1930448] [NEW] 'VolumeNotFound' exception is not handled

2021-06-01 Thread Stephen Finucane
Public bug reported:

Attempting to attach a volume using an invalid ID currently results in an
HTTP 500 error. This error should be handled and an HTTP 4xx error
returned instead.

  $ openstack server create ... \
 --block-device 
source_type=volume,uuid=44d317a3-6183-4063-868b-aa0728576f5f,destination_type=volume,delete_on_termination=true
 \
 --wait test-server
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-7fe03627-c4ce-4f4b-9d5c-3abd6b88d3e3)

where '44d317a3-6183-4063-868b-aa0728576f5f' is not a UUID corresponding
to a valid volume.
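
A minimal sketch of the kind of handling the report asks for, translating the driver-level "not found" error into a client error instead of letting it escape as an HTTP 500; the classes below are stand-ins, not nova's actual exception hierarchy:

```python
# Illustrative only: convert VolumeNotFound into a 400 at the API layer.

class VolumeNotFound(Exception):
    pass

class HTTPBadRequest(Exception):
    def __init__(self, explanation):
        super().__init__(explanation)
        self.code = 400
        self.explanation = explanation

def attach_volume(volume_api, volume_id):
    """Look up the volume, converting VolumeNotFound into a 400."""
    try:
        return volume_api.get(volume_id)
    except VolumeNotFound as exc:
        # Surface a client error instead of an unexpected 500.
        raise HTTPBadRequest(explanation=str(exc))

class FakeVolumeAPI:
    def get(self, volume_id):
        raise VolumeNotFound("Volume %s could not be found." % volume_id)

try:
    attach_volume(FakeVolumeAPI(), "44d317a3-6183-4063-868b-aa0728576f5f")
except HTTPBadRequest as exc:
    print(exc.code)  # 400
```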

A full traceback from nova-compute is provided below.

  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi [None req-7fe03627-c4ce-4f4b-9d5c-3abd6b88d3e3 demo 
admin] Unexpected exception in API method: nova.exception.VolumeNotFound: 
Volume 44d317a3-6183-4063-868b-aa0728576f5f could not be found.
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi Traceback (most recent call last):
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi   File "/opt/stack/nova/nova/volume/cinder.py", line 
432, in wrapper
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi res = method(self, ctx, volume_id, *args, **kwargs)
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi   File "/opt/stack/nova/nova/volume/cinder.py", line 
498, in get
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi item = cinderclient(
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi   File 
"/usr/local/lib/python3.8/dist-packages/cinderclient/v2/volumes.py", line 281, 
in get
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi return self._get("/volumes/%s" % volume_id, 
"volume")
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi   File 
"/usr/local/lib/python3.8/dist-packages/cinderclient/base.py", line 293, in _get
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi resp, body = self.api.client.get(url)
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi   File 
"/usr/local/lib/python3.8/dist-packages/cinderclient/client.py", line 215, in 
get
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi return self._cs_request(url, 'GET', **kwargs)
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi   File 
"/usr/local/lib/python3.8/dist-packages/cinderclient/client.py", line 206, in 
_cs_request
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi return self.request(url, method, **kwargs)
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi   File 
"/usr/local/lib/python3.8/dist-packages/cinderclient/client.py", line 192, in 
request
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi raise exceptions.from_response(resp, body)
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi cinderclient.exceptions.NotFound: Volume 
44d317a3-6183-4063-868b-aa0728576f5f could not be found. (HTTP 404) 
(Request-ID: req-9fcc1abe-1212-45f5-ac76-60fdecd22506)
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi 
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi During handling of the above exception, another 
exception occurred:
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi Traceback (most recent call last):
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi   File "/opt/stack/nova/nova/api/openstack/wsgi.py", 
line 658, in wrapped
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi return f(*args, **kwargs)
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 110, in wrapper
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi return func(*args, **kwargs)
  Jun 01 15:05:14 devstack-ubuntu2004 devstack@n-api.service[1658]: ERROR 
nova.api.openstack.wsgi   File 
"/opt/stack/nova/nova/api/validation/__init__.py", line 

[Yahoo-eng-team] [Bug 1930443] [NEW] [LB] Linux Bridge agent always loads trunk extension, regardless of the loaded service plugins

2021-06-01 Thread Rodolfo Alonso
Public bug reported:

The Linux Bridge agent always loads the trunk extension, regardless of the
loaded service plugins. The trunk agent extension executes unnecessary
operations that slow down the agent.
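
A possible shape of the fix, sketched as a small helper; the function and its arguments are illustrative, not the agent's actual configuration API:

```python
# Hypothetical gating: only keep the trunk agent extension when the
# 'trunk' service plugin is actually enabled server-side.

def agent_extensions(requested, service_plugins):
    """Drop the trunk extension when the trunk service plugin is absent."""
    return [ext for ext in requested
            if ext != "trunk" or "trunk" in service_plugins]

print(agent_extensions(["qos", "trunk"], service_plugins=["router"]))  # ['qos']
print(agent_extensions(["qos", "trunk"], service_plugins=["trunk"]))   # ['qos', 'trunk']
```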

For example:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ea7/792505/3/check
/neutron-fullstack-with-uwsgi_6/ea75d31/controller/logs/dsvm-fullstack-
logs/TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart_LB
%2CFlat-network_/neutron-linuxbridge-agent--2021-05-25--
08-56-07-279444_log.txt

A new port is added. The trunk agent extension receives those events and
executes the port processing:
2021-05-25 08:56:17.731 107232 DEBUG neutron_lib.callbacks.manager 
[req-58c1c8d4-c2a9-431c-a2a0-46f80be0e835 - - - - -] Notify callbacks 
['neutron.services.trunk.drivers.linuxbridge.agent.driver.LinuxBridgeTrunkDriver.agent_port_change-145314']
 for port_device, after_update _notify_loop 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/neutron_lib/callbacks/manager.py:192

...

2021-05-25 08:56:17.788 107232 DEBUG oslo_concurrency.lockutils [req-
58c1c8d4-c2a9-431c-a2a0-46f80be0e835 - - - - -] Acquired lock "trunk-
tapdd335ce5-14" lock /home/zuul/src/opendev.org/openstack/neutron/.tox
/dsvm-fullstack-gate/lib/python3.8/site-
packages/oslo_concurrency/lockutils.py:266


This lock is released at:
2021-05-25 08:56:17.829 107232 DEBUG oslo_concurrency.lockutils 
[req-58c1c8d4-c2a9-431c-a2a0-46f80be0e835 - - - - -] Releasing lock 
"trunk-tapdd335ce5-14" lock 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:282

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930443

Title:
  [LB] Linux Bridge agent always loads trunk extension, regardless of
  the loaded service plugins

Status in neutron:
  New

Bug description:
  The Linux Bridge agent always loads the trunk extension, regardless of
  the loaded service plugins. The trunk agent extension executes
  unnecessary operations that slow down the agent.

  For example:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ea7/792505/3/check
  /neutron-fullstack-with-uwsgi_6/ea75d31/controller/logs/dsvm-
  fullstack-
  logs/TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart_LB
  %2CFlat-network_/neutron-linuxbridge-agent--2021-05-25--
  08-56-07-279444_log.txt

  A new port is added. The trunk agent extension receives those events and
  executes the port processing:
  2021-05-25 08:56:17.731 107232 DEBUG neutron_lib.callbacks.manager 
[req-58c1c8d4-c2a9-431c-a2a0-46f80be0e835 - - - - -] Notify callbacks 
['neutron.services.trunk.drivers.linuxbridge.agent.driver.LinuxBridgeTrunkDriver.agent_port_change-145314']
 for port_device, after_update _notify_loop 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/neutron_lib/callbacks/manager.py:192

  ...

  2021-05-25 08:56:17.788 107232 DEBUG oslo_concurrency.lockutils [req-
  58c1c8d4-c2a9-431c-a2a0-46f80be0e835 - - - - -] Acquired lock "trunk-
  tapdd335ce5-14" lock /home/zuul/src/opendev.org/openstack/neutron/.tox
  /dsvm-fullstack-gate/lib/python3.8/site-
  packages/oslo_concurrency/lockutils.py:266

  
  This lock is released at:
  2021-05-25 08:56:17.829 107232 DEBUG oslo_concurrency.lockutils 
[req-58c1c8d4-c2a9-431c-a2a0-46f80be0e835 - - - - -] Releasing lock 
"trunk-tapdd335ce5-14" lock 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_concurrency/lockutils.py:282

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930443/+subscriptions



[Yahoo-eng-team] [Bug 1930432] [NEW] [L2] provisioning_block should be added to Neutron internal service port? Or should not?

2021-06-01 Thread LIU Yulong
Public bug reported:

provisioning_blocks are mostly used for compute ports, to notify that
Neutron has finished the networking setup so the VM can power on. However,
Neutron currently does not check the port's device owner; it adds a
provisioning_block to all types of ports, even those used by Neutron
itself.

For some internal service ports, Neutron does not notify itself of
anything, but the provisioning_block may cause timeouts during resource
setup.
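
The device-owner check the report suggests could look roughly like this; the helper is illustrative, not Neutron code (the device-owner strings follow Neutron's conventions):

```python
# Illustrative only: skip the provisioning_block for Neutron-internal
# service ports; only compute-owned ports need one.

COMPUTE_OWNER_PREFIX = "compute:"

def needs_provisioning_block(port):
    """Return True only for ports owned by a compute service (e.g. nova)."""
    return port.get("device_owner", "").startswith(COMPUTE_OWNER_PREFIX)

print(needs_provisioning_block({"device_owner": "compute:nova"}))              # True
print(needs_provisioning_block({"device_owner": "network:router_interface"}))  # False
```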

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930432

Title:
  [L2] provisioning_block should be added to Neutron internal service
  port? Or should not?

Status in neutron:
  New

Bug description:
  provisioning_blocks are mostly used for compute ports, to notify that
  Neutron has finished the networking setup so the VM can power on.
  However, Neutron currently does not check the port's device owner; it
  adds a provisioning_block to all types of ports, even those used by
  Neutron itself.

  For some internal service ports, Neutron does not notify itself of
  anything, but the provisioning_block may cause timeouts during resource
  setup.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930432/+subscriptions



[Yahoo-eng-team] [Bug 1930420] [NEW] Horizon should autofill Source tab for instance when launching it from volume

2021-06-01 Thread Vadym Markov
Public bug reported:

Reproduced on recent devstack master.

Steps to reproduce:

1. Create a bootable volume from a cirros image. 1 GB is enough
2. Select created volume and choose "Launch as Instance"
3. Define instance name, allocate flavor and network in corresponding tabs
4. Launch instance

Horizon should autofill the Source tab correctly when launching an
instance from a volume. By default the “image” option is selected, leading
to the error: Block Device Mapping is Invalid: Missing device UUID. (HTTP
400) (Request-ID: req-some-uuid)

Expected Result:
Instance created successfully

Actual Result:
Instance creation fails with the following error: Block Device Mapping is
Invalid: Missing device UUID. (HTTP 400) (Request-ID: req-some-uuid)

** Affects: horizon
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1930420

Title:
  Horizon should autofill Source tab for instance when launching it from
  volume

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Reproduced on recent devstack master.

  Steps to reproduce:

  1. Create a bootable volume from a cirros image. 1 GB is enough
  2. Select created volume and choose "Launch as Instance"
  3. Define instance name, allocate flavor and network in corresponding tabs
  4. Launch instance

  Horizon should autofill the Source tab correctly when launching an
  instance from a volume. By default the “image” option is selected,
  leading to the error: Block Device Mapping is Invalid: Missing device
  UUID. (HTTP 400) (Request-ID: req-some-uuid)

  Expected Result:
  Instance created successfully

  Actual Result:
  Instance creation fails with the following error: Block Device Mapping
  is Invalid: Missing device UUID. (HTTP 400) (Request-ID: req-some-uuid)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1930420/+subscriptions



[Yahoo-eng-team] [Bug 1930405] [NEW] Horizon Integration tests Fails if network backend is OVN

2021-06-01 Thread Vishal Manchanda
Public bug reported:

Horizon integration tests, specifically the router-related tests [1],
started failing after devstack changed the default Neutron backend from
OVS to OVN. More details about the migration to OVN can be found here [2].

[1] http://paste.openstack.org/show/806215/
[2] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022410.html

** Affects: horizon
 Importance: High
 Assignee: Vishal Manchanda (vishalmanchanda)
 Status: New

** Changed in: horizon
   Importance: Undecided => High

** Changed in: horizon
 Assignee: (unassigned) => Vishal Manchanda (vishalmanchanda)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1930405

Title:
  Horizon Integration tests Fails if network backend is OVN

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon integration tests, specifically the router-related tests [1],
  started failing after devstack changed the default Neutron backend from
  OVS to OVN. More details about the migration to OVN can be found here
  [2].

  [1] http://paste.openstack.org/show/806215/
  [2] 
http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022410.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1930405/+subscriptions



[Yahoo-eng-team] [Bug 1930406] [NEW] parallel volume-attachment requests might starve out nova-api for others

2021-06-01 Thread Johannes Kulik
Public bug reported:

When doing volume attachments, nova-api does an RPC call (with a
long_rpc_timeout) into nova-compute to reserve_block_device_name(). This
takes a lock on the instance. If a volume attachment is in progress, which
also takes the instance lock, nova-api's RPC call needs to wait.

Having RPC calls in nova-api that can take a long time will block the
process handling the request. If a project does a lot of volume
attachments (e.g. for a k8s workload, > 10 attachments per instance), this
could starve out other users of nova-api by occupying all available
processes.

When running nova-api with eventlet, a small number of processes can
handle a lot of requests in parallel and some blocking rpc-calls don't
matter too much.

When switching to uWSGI, the number of processes would have to be
increased drastically to accommodate that - unless it's possible to map
those requests to threads and use a high number of threads instead.

What's the recommended way to run nova-api on uWSGI to handle this? Low
number of processes with high number of threads to mimic eventlet?
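
The trade-off the question describes could be expressed as a uWSGI configuration fragment along these lines (values are placeholders for illustration, not a recommendation):

```ini
[uwsgi]
; Few processes, many threads: a blocking RPC call ties up only one
; thread rather than a whole worker process, mimicking eventlet's
; many-concurrent-requests-per-process behaviour.
processes = 2
threads = 32
enable-threads = true
```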

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1930406

Title:
  parallel volume-attachment requests might starve out nova-api for
  others

Status in OpenStack Compute (nova):
  New

Bug description:
  When doing volume attachments, nova-api does an RPC call (with a
  long_rpc_timeout) into nova-compute to reserve_block_device_name().
  This takes a lock on the instance. If a volume attachment is in
  progress, which also takes the instance lock, nova-api's RPC call needs
  to wait.

  Having RPC calls in nova-api that can take a long time will block the
  process handling the request. If a project does a lot of volume
  attachments (e.g. for a k8s workload, > 10 attachments per instance),
  this could starve out other users of nova-api by occupying all
  available processes.

  When running nova-api with eventlet, a small number of processes can
  handle a lot of requests in parallel and some blocking rpc-calls don't
  matter too much.

  When switching to uWSGI, the number of processes would have to be
  increased drastically to accommodate that - unless it's possible to map
  those requests to threads and use a high number of threads instead.

  What's the recommended way to run nova-api on uWSGI to handle this?
  Low number of processes with high number of threads to mimic eventlet?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1930406/+subscriptions



[Yahoo-eng-team] [Bug 1930402] [NEW] SSH timeouts happens very often in the ovn based CI jobs

2021-06-01 Thread Slawek Kaplonski
Public bug reported:

I saw those errors mostly in neutron-ovn-tempest-slow, but they probably
happen in other jobs too. Examples of failures:

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0de/791365/7/check/neutron-ovn-tempest-slow/0de1c30/testr_results.html
https://2f7fb53980d59c550f7f-e09de525732b656a1c483807eeb06fc8.ssl.cf2.rackcdn.com/793369/3/check/neutron-ovn-tempest-slow/8a116c7/testr_results.html
https://f86d217b949ada82d82c-f669355b5e0e599ce4f84e6e473a124c.ssl.cf2.rackcdn.com/791365/6/check/neutron-ovn-tempest-slow/5dc8d92/testr_results.html

In all those cases, the common thing is that the VMs seem to get an IP
address from DHCP properly and cloud-init seems to be working fine, but
SSH to the FIP is not possible.

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930402

Title:
  SSH timeouts happens very often in the ovn based CI jobs

Status in neutron:
  Confirmed

Bug description:
  I saw those errors mostly in neutron-ovn-tempest-slow, but they
  probably happen in other jobs too. Examples of failures:

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0de/791365/7/check/neutron-ovn-tempest-slow/0de1c30/testr_results.html
  
https://2f7fb53980d59c550f7f-e09de525732b656a1c483807eeb06fc8.ssl.cf2.rackcdn.com/793369/3/check/neutron-ovn-tempest-slow/8a116c7/testr_results.html
  
https://f86d217b949ada82d82c-f669355b5e0e599ce4f84e6e473a124c.ssl.cf2.rackcdn.com/791365/6/check/neutron-ovn-tempest-slow/5dc8d92/testr_results.html

  In all those cases, the common thing is that the VMs seem to get an IP
  address from DHCP properly and cloud-init seems to be working fine, but
  SSH to the FIP is not possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930402/+subscriptions



[Yahoo-eng-team] [Bug 1930401] [NEW] Fullstack l3 agent tests failing due to timeout waiting until port is active

2021-06-01 Thread Slawek Kaplonski
Public bug reported:

Many fullstack L3 agent related tests have been failing recently, and the
common thing for many of them is that they fail while waiting for the port
status to become ACTIVE. For example:

https://9cec50bd524f94a2df4c-c6273b9a7cf594e42eb2c4e7f818.ssl.cf5.rackcdn.com/791365/6/check/neutron-fullstack-with-uwsgi/6fc0704/testr_results.html
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_73b/793141/2/check/neutron-fullstack-with-uwsgi/73b08ae/testr_results.html
https://b87ba208d44b7f1356ad-f27c11edabee52a7804784593cf2712d.ssl.cf5.rackcdn.com/791365/5/check/neutron-fullstack-with-uwsgi/634ccb1/testr_results.html
https://dd43e0f9601da5e2e650-51b18fcc89837fbadd0245724df9c686.ssl.cf1.rackcdn.com/791365/6/check/neutron-fullstack-with-uwsgi/5413cd9/testr_results.html
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_8d0/791365/5/check/neutron-fullstack-with-uwsgi/8d024fb/testr_results.html
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_188/791365/5/check/neutron-fullstack-with-uwsgi/188aa48/testr_results.html
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9a3/792998/2/check/neutron-fullstack-with-uwsgi/9a3b5a2/testr_results.html

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: fullstack l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930401

Title:
  Fullstack l3 agent tests failing due to timeout waiting until port is
  active

Status in neutron:
  Confirmed

Bug description:
  Many fullstack L3 agent related tests have been failing recently, and
  the common thing for many of them is that they fail while waiting for
  the port status to become ACTIVE. For example:

  
https://9cec50bd524f94a2df4c-c6273b9a7cf594e42eb2c4e7f818.ssl.cf5.rackcdn.com/791365/6/check/neutron-fullstack-with-uwsgi/6fc0704/testr_results.html
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_73b/793141/2/check/neutron-fullstack-with-uwsgi/73b08ae/testr_results.html
  
https://b87ba208d44b7f1356ad-f27c11edabee52a7804784593cf2712d.ssl.cf5.rackcdn.com/791365/5/check/neutron-fullstack-with-uwsgi/634ccb1/testr_results.html
  
https://dd43e0f9601da5e2e650-51b18fcc89837fbadd0245724df9c686.ssl.cf1.rackcdn.com/791365/6/check/neutron-fullstack-with-uwsgi/5413cd9/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_8d0/791365/5/check/neutron-fullstack-with-uwsgi/8d024fb/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_188/791365/5/check/neutron-fullstack-with-uwsgi/188aa48/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9a3/792998/2/check/neutron-fullstack-with-uwsgi/9a3b5a2/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930401/+subscriptions



[Yahoo-eng-team] [Bug 1930397] [NEW] neutron-lib from master branch is breaking our UT job

2021-06-01 Thread Slawek Kaplonski
Public bug reported:

For now it shows up only in the periodic queue:

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9e8/periodic/opendev.org/openstack/neutron/master/openstack-tox-py36-with-neutron-lib-master/9e852a4/testr_results.html

but we need to fix it before we release and use the new neutron-lib.

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930397

Title:
  neutron-lib from master branch is breaking our UT job

Status in neutron:
  Confirmed

Bug description:
  For now it shows up only in the periodic queue:

  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9e8/periodic/opendev.org/openstack/neutron/master/openstack-tox-py36-with-neutron-lib-master/9e852a4/testr_results.html

  but we need to fix it before we release and use the new neutron-lib.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930397/+subscriptions



[Yahoo-eng-team] [Bug 1930283] Re: PUT /v2.0/qos/policies/{policy_id}/minimum_bandwidth_rules/{rule_id} returns HTTP 501 which is undocumented in the API ref

2021-06-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron-lib/+/793802
Committed: https://opendev.org/openstack/neutron-lib/commit/c3f6029f987a22e4ed5f2ac46618b6bf435a6f9d
Submitter: "Zuul (22348)"
Branch: master

commit c3f6029f987a22e4ed5f2ac46618b6bf435a6f9d
Author: Balazs Gibizer 
Date:   Mon May 31 16:29:09 2021 +0200

[api-ref] Add HTTP 501 for PUT min bw QoS rule

The change I477edb0ae35b385ac776a58195f22382e2fce4ed introduced a new
possible response code for PUT
/v2.0/qos/policies/{policy_id}/minimum_bandwidth_rules/{rule_id} API
but it missed to document it in the API ref. This patch amends the
documentation accordingly.

Closes-Bug: #1930283
Change-Id: I848e919d0a2e2c439aed7cd6e9ec0ab5102a43df


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930283

Title:
  PUT /v2.0/qos/policies/{policy_id}/minimum_bandwidth_rules/{rule_id}
  returns HTTP 501 which is undocumented in the API ref

Status in neutron:
  Fix Released

Bug description:
  The patch https://review.opendev.org/c/openstack/neutron/+/636970
  introduced a check that rejects a QoS minimum-bandwidth rule update
  when the port is bound to a VM. This check can result in HTTP 501,
  which is not documented in the API ref.

  
  ---
  Release:  on 2021-05-27 10:03:52
  SHA: aef3e78def5f96fee226ed8f9602ae90577a4228
  Source: 
https://opendev.org/openstack/neutron-lib/src/api-ref/source/v2/index.rst
  URL: 
https://docs.openstack.org/api-ref/network/v2/?expanded=update-minimum-bandwidth-rule-detail

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930283/+subscriptions



[Yahoo-eng-team] [Bug 1930367] [NEW] "TestNeutronServer" related tests failing frequently

2021-06-01 Thread Rodolfo Alonso
Public bug reported:

"TestNeutronServer" and the test classes inheriting from it
("TestWsgiServer", "TestRPCServer", "TestPluginWorker") are failing
frequently.

Error:
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d0d/791365/7/gate/neutron-functional-with-uwsgi/d0d293b/testr_results.html

Snippet: http://paste.openstack.org/show/806208/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930367

Title:
  "TestNeutronServer" related tests failing frequently

Status in neutron:
  New

Bug description:
  "TestNeutronServer" and the test classes inheriting from it
  ("TestWsgiServer", "TestRPCServer", "TestPluginWorker") are failing
  frequently.

  Error:
  https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d0d/791365/7/gate/neutron-functional-with-uwsgi/d0d293b/testr_results.html

  Snippet: http://paste.openstack.org/show/806208/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930367/+subscriptions
