[Yahoo-eng-team] [Bug 1990257] Re: [OpenStack Yoga] Creating a VM fails when only one rabbitmq is stopped

2022-11-15 Thread Son Do Xuan
** Also affects: rabbitmq
   Importance: Undecided
   Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1990257

Title:
  [OpenStack Yoga] Creating a VM fails when only one rabbitmq is stopped

Status in Cinder:
  New
Status in kolla:
  New
Status in OpenStack Compute (nova):
  New
Status in RabbitMQ:
  New

Bug description:
  Hi, I deployed a new OpenStack cluster (OpenStack Yoga) with
  kolla-ansible. Everything works fine.
  Then I stopped just one rabbitmq-server in the cluster; after that, I
  can't create a new VM.

  Reproduce:
  - Deploy a new OpenStack Yoga cluster with kolla-ansible
  - Stop rabbitmq on one random node (docker stop rabbitmq)
  - Try to create a new server
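For reference, the workaround usually tried for this class of failure is tuning the oslo.messaging rabbit options in nova.conf (and the other services' configs). This is a sketch only: the option names exist in oslo.messaging, but the values are illustrative and it is not verified that they resolve this particular bug:

```
[oslo_messaging_rabbit]
# Mirror queues across the cluster so a queue survives the loss of one node
rabbit_ha_queues = true
amqp_durable_queues = true
# Reconnect quickly to the surviving nodes
kombu_reconnect_delay = 1.0
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
```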

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1990257/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1996593] [NEW] [OpenStack-OVN] Poor network performance

2022-11-15 Thread Son Do Xuan
Public bug reported:

Hi, I deployed a new OpenStack cluster (version Yoga, Victoria), with
neutron using "Networking-ovn".
Then I created 2 VMs (with the iperf3 tool installed) on the same network
and on the same compute node. Testing the bandwidth between the 2 VMs gives:
- 7 Gbps when the VM's port has a Security Group.
- 18 Gbps when the VM's port doesn't have a Security Group.

Meanwhile, I deployed a new OpenStack cluster (version Yoga, Victoria),
with neutron using "OpenvSwitch".
Then I created 2 VMs (with the iperf3 tool installed) on the same network
and on the same compute node. Testing the bandwidth between the 2 VMs gives:
- 18 Gbps when the VM's port has a Security Group.
- 18 Gbps when the VM's port doesn't have a Security Group.

---> The bandwidth between 2 VMs (when the VM's port has a Security
Group) is very low when Neutron is deployed with Networking-OVN.

** Affects: networking-ovn
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovn

** Also affects: networking-ovn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1996593

Title:
  [OpenStack-OVN] Poor network performance

Status in networking-ovn:
  New
Status in neutron:
  New

Bug description:
  Hi, I deployed a new OpenStack cluster (version Yoga, Victoria), with
  neutron using "Networking-ovn".
  Then I created 2 VMs (with the iperf3 tool installed) on the same
  network and on the same compute node. Testing the bandwidth between the
  2 VMs gives:
  - 7 Gbps when the VM's port has a Security Group.
  - 18 Gbps when the VM's port doesn't have a Security Group.

  Meanwhile, I deployed a new OpenStack cluster (version Yoga, Victoria),
  with neutron using "OpenvSwitch".
  Then I created 2 VMs (with the iperf3 tool installed) on the same
  network and on the same compute node. Testing the bandwidth between the
  2 VMs gives:
  - 18 Gbps when the VM's port has a Security Group.
  - 18 Gbps when the VM's port doesn't have a Security Group.

  ---> The bandwidth between 2 VMs (when the VM's port has a Security
  Group) is very low when Neutron is deployed with Networking-OVN.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1996593/+subscriptions




[Yahoo-eng-team] [Bug 1996594] [NEW] OVN metadata randomly stops working

2022-11-15 Thread Hua Zhang
Public bug reported:

We found that OVN metadata randomly stops working while OVN is writing a
snapshot.

1. At 12:30:35, OVN started to transfer leadership to write a snapshot

$ find sosreport-juju-2752e1-*/var/log/ovn/* | xargs zgrep -i -E 'Transferring leadership'
sosreport-juju-2752e1-6-lxd-24-xxx-2022-08-18-entowko/var/log/ovn/ovsdb-server-sb.log:2022-08-18T12:30:35.322Z|80962|raft|INFO|Transferring leadership to write a snapshot.
sosreport-juju-2752e1-6-lxd-24-xxx-2022-08-18-entowko/var/log/ovn/ovsdb-server-sb.log:2022-08-18T17:52:53.024Z|82382|raft|INFO|Transferring leadership to write a snapshot.
sosreport-juju-2752e1-7-lxd-27-xxx-2022-08-18-hhxxqci/var/log/ovn/ovsdb-server-sb.log:2022-08-18T12:30:35.330Z|92698|raft|INFO|Transferring leadership to write a snapshot.

2. At 12:30:36, neutron-ovn-metadata-agent reported an OVSDB Error

$ find sosreport-srv1*/var/log/neutron/* | xargs zgrep -i -E 'OVSDB Error'
sosreport-srv1xxx2d-xxx-2022-08-18-cuvkufw/var/log/neutron/neutron-ovn-metadata-agent.log:2022-08-18 12:30:36.103 75556 ERROR ovsdbapp.backend.ovs_idl.transaction [-] OVSDB Error: no error details available
sosreport-srv1xxx6d-xxx-2022-08-18-bgnovqu/var/log/neutron/neutron-ovn-metadata-agent.log:2022-08-18 12:30:36.104 2171 ERROR ovsdbapp.backend.ovs_idl.transaction [-] OVSDB Error: no error details available

3. At 12:57:53, we saw the error 'No port found in network'; from that
point on, OVN metadata randomly stops working:

2022-08-18 12:57:53.800 3730 ERROR neutron.agent.ovn.metadata.server [-] No port found in network 63e2c276-60dd-40e3-baa1-c16342eacce2 with IP address 100.94.98.135

After the problem occurs, restarting neutron-ovn-metadata-agent, or
restarting the haproxy instance as follows, can be used as a workaround:

/usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ip netns exec ovnmeta-63e2c276-60dd-40e3-baa1-c16342eacce2 haproxy -f /var/lib/neutron/ovn-metadata-proxy/63e2c276-60dd-40e3-baa1-c16342eacce2.conf

LP bug #1990978 [1] is trying to reduce the frequency of leadership
transfers, which should help with this problem.
But it only reduces the occurrence of the problem, it does not avoid it
completely. I wonder if we need to add some retry logic on the neutron
side.
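A minimal sketch of the kind of retry logic suggested above, on the caller's side (the exception class, function names, and backoff values here are hypothetical, not actual neutron or ovsdbapp code):

```python
import time

class OVSDBError(Exception):
    """Stand-in for the transaction error reported by ovsdbapp."""

def with_retries(txn_fn, attempts=3, delay=1.0):
    """Run a transaction function, retrying a few times so that a brief
    OVSDB leadership transfer does not surface as a hard failure."""
    for attempt in range(1, attempts + 1):
        try:
            return txn_fn()
        except OVSDBError:
            if attempt == attempts:
                raise  # give up after the last attempt
            time.sleep(delay)
```

Whether a retry at this layer is the right fix, as opposed to resynchronizing the agent's view of the southbound DB, is exactly the open question in this report.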

NOTE: The openstack version we are using is focal-xena, and
openvswitch's version is 2.16.0-0ubuntu2.1~cloud0

[1] https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1990978

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1996594

Title:
  OVN metadata randomly stops working

Status in neutron:
  New

Bug description:
  We found that OVN metadata randomly stops working while OVN is writing
  a snapshot.

  1. At 12:30:35, OVN started to transfer leadership to write a snapshot

  $ find sosreport-juju-2752e1-*/var/log/ovn/* | xargs zgrep -i -E 'Transferring leadership'
  sosreport-juju-2752e1-6-lxd-24-xxx-2022-08-18-entowko/var/log/ovn/ovsdb-server-sb.log:2022-08-18T12:30:35.322Z|80962|raft|INFO|Transferring leadership to write a snapshot.
  sosreport-juju-2752e1-6-lxd-24-xxx-2022-08-18-entowko/var/log/ovn/ovsdb-server-sb.log:2022-08-18T17:52:53.024Z|82382|raft|INFO|Transferring leadership to write a snapshot.
  sosreport-juju-2752e1-7-lxd-27-xxx-2022-08-18-hhxxqci/var/log/ovn/ovsdb-server-sb.log:2022-08-18T12:30:35.330Z|92698|raft|INFO|Transferring leadership to write a snapshot.

  2. At 12:30:36, neutron-ovn-metadata-agent reported an OVSDB Error

  $ find sosreport-srv1*/var/log/neutron/* | xargs zgrep -i -E 'OVSDB Error'
  sosreport-srv1xxx2d-xxx-2022-08-18-cuvkufw/var/log/neutron/neutron-ovn-metadata-agent.log:2022-08-18 12:30:36.103 75556 ERROR ovsdbapp.backend.ovs_idl.transaction [-] OVSDB Error: no error details available
  sosreport-srv1xxx6d-xxx-2022-08-18-bgnovqu/var/log/neutron/neutron-ovn-metadata-agent.log:2022-08-18 12:30:36.104 2171 ERROR ovsdbapp.backend.ovs_idl.transaction [-] OVSDB Error: no error details available

  3. At 12:57:53, we saw the error 'No port found in network'; from that
  point on, OVN metadata randomly stops working:

  2022-08-18 12:57:53.800 3730 ERROR neutron.agent.ovn.metadata.server [-] No port found in network 63e2c276-60dd-40e3-baa1-c16342eacce2 with IP address 100.94.98.135

  After the problem occurs, restarting neutron-ovn-metadata-agent, or
  restarting the haproxy instance as follows, can be used as a
  workaround:

  /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ip netns exec ovnmeta-63e2c276-60dd-40e3-baa1-c16342eacce2 haproxy -f /var/lib/neutron/ovn-metadata-proxy/63e2c276-60dd-40e3-baa1-c16342eacce2.conf

  LP bug #1990978 [1] is trying to reduce the frequency of leadership
  transfers, which should help with this problem.
  But it only reduces the occurrence of the problem, it does not avoid it
  completely. I wonder if we need to add some retry logic on the neutron
  side.

[Yahoo-eng-team] [Bug 1996600] [NEW] Federated users with Admin role in a domain cannot see or manage project quotas

2022-11-15 Thread Diko Parvanov
Public bug reported:

We have a federated domain with users added via SAML (mellon). Adding
either the Admin or the Admin --inherited role to a federated user on a
domain doesn't show the option to modify quotas. However, when manually
visiting the URL https://horizon/identity//update_quotas/ I can NOT see
the current quotas (blank input fields), but I can populate them, and on
Save they are all modified (verified with the openstack CLI as an admin
user).

This looks like a UI bug for federated users, as this does not happen
with native keystone users.

Installed packages:

ii  python3-django-horizon            3:18.3.5-0ubuntu2         all  Django module providing web based interaction with OpenStack (Python 3)
ii  heat-dashboard-common             3.0.1-0ubuntu1            all  OpenStack orchestration service - Common files
ii  openstack-dashboard               3:18.3.5-0ubuntu2         all  Django web interface for OpenStack
ii  openstack-dashboard-common        3:18.3.5-0ubuntu2         all  Django web interface for OpenStack - common files
ii  openstack-dashboard-ubuntu-theme  3:18.3.5-0ubuntu2         all  Transitional dummy package for Ubuntu theme for Horizon
ii  python3-designate-dashboard       10.0.0-0ubuntu0.20.04.1   all  OpenStack DNS as a Service - Python 3 dashboard plugin
ii  python3-heat-dashboard            3.0.1-0ubuntu1            all  OpenStack orchestration service - Python 3 dashboard plugin
ii  python3-neutron-fwaas-dashboard   1:3.0.0-0ubuntu0.20.04.1  all  OpenStack Firewall as a Service - dashboard plugin
ii  python3-octavia-dashboard         5.0.0-0ubuntu0.20.04.1    all  OpenStack Load Balance as a service - dashboard plugin

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1996600

Title:
  Federated users with Admin role in a domain cannot see or manage
  project quotas

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We have a federated domain with users added via SAML (mellon). Adding
  either the Admin or the Admin --inherited role to a federated user on
  a domain doesn't show the option to modify quotas. However, when
  manually visiting the URL https://horizon/identity//update_quotas/ I
  can NOT see the current quotas (blank input fields), but I can
  populate them, and on Save they are all modified (verified with the
  openstack CLI as an admin user).

  This looks like a UI bug for federated users, as this does not happen
  with native keystone users.

  Installed packages:

  ii  python3-django-horizon            3:18.3.5-0ubuntu2         all  Django module providing web based interaction with OpenStack (Python 3)
  ii  heat-dashboard-common             3.0.1-0ubuntu1            all  OpenStack orchestration service - Common files
  ii  openstack-dashboard               3:18.3.5-0ubuntu2         all  Django web interface for OpenStack
  ii  openstack-dashboard-common        3:18.3.5-0ubuntu2         all  Django web interface for OpenStack - common files
  ii  openstack-dashboard-ubuntu-theme  3:18.3.5-0ubuntu2         all  Transitional dummy package for Ubuntu theme for Horizon
  ii  python3-designate-dashboard       10.0.0-0ubuntu0.20.04.1   all  OpenStack DNS as a Service - Python 3 dashboard plugin
  ii  python3-heat-dashboard            3.0.1-0ubuntu1            all  OpenStack orchestration service - Python 3 dashboard plugin
  ii  python3-neutron-fwaas-dashboard   1:3.0.0-0ubuntu0.20.04.1  all  OpenStack Firewall as a Service - dashboard plugin
  ii  python3-octavia-dashboard         5.0.0-0ubuntu0.20.04.1    all  OpenStack Load Balance as a service - dashboard plugin

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1996600/+subscriptions




[Yahoo-eng-team] [Bug 1995078] Re: OVN: HA chassis group priority is different than gateway chassis priority

2022-11-15 Thread Bartosz Bezak
** Also affects: kolla-ansible
   Importance: Undecided
   Status: New

** Also affects: kolla-ansible/yoga
   Importance: Undecided
   Status: New

** Also affects: kolla-ansible/wallaby
   Importance: Undecided
   Status: New

** Also affects: kolla-ansible/zed
   Importance: Undecided
   Status: New

** Also affects: kolla-ansible/xena
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1995078

Title:
  OVN: HA chassis group priority is different than gateway chassis
  priority

Status in kolla-ansible:
  New
Status in kolla-ansible wallaby series:
  New
Status in kolla-ansible xena series:
  New
Status in kolla-ansible yoga series:
  New
Status in kolla-ansible zed series:
  New
Status in neutron:
  Confirmed

Bug description:
  OpenStack release affected - Wallaby, Xena and Yoga for sure
  OVN version: 21.12 (from CentOS NFV SIG repos)
  Host OS: CentOS Stream 8

  Neutron creates External ports for bare metal instances and uses
  ha_chassis_group.
  Neutron normally defines different priorities for a router's LRP
  gateway chassis and for the ha_chassis_group.

  I have a router with two VLANs attached - external (used for internet
  connectivity - SNAT or DNAT/Floating IP) and internal VLAN network
  hosting bare metal servers (and some Geneve networks for VMs).

  If an External port’s HA chassis group active chassis is different
  than gateway chassis (external vlan network) active chassis - those
  bare metal servers have intermittent network connectivity for any
  traffic going through that router.

  In a thread on the ovs-discuss ML, Numan Siddique wrote that "it is
  recommended that the same controller which is actively handling the
  gateway traffic also handles the external ports".

  More information in this thread:
  https://mail.openvswitch.org/pipermail/ovs-discuss/2022-October/052067.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1995078/+subscriptions




[Yahoo-eng-team] [Bug 1996606] [NEW] QoS rules policies do not work for "owners"

2022-11-15 Thread Rodolfo Alonso
Public bug reported:

Related bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2141470

Policies for QoS rules do not work for "owner" checks since QoS rules do
not have a project ID. When the default policy is overridden, the policy
enforcement raises an exception. For example:

  "update_policy_bandwidth_limit_rule": "rule:admin_or_owner"

When the policy engine tries to check the owner, it first checks the
project_id of the object. In this case, the QoS rule does NOT have a
project ID (e.g. the max-bw rule definition [1]).

This is the exception the engine returns: [2].

[1] https://github.com/openstack/neutron/blob/320f54eba1a82917e4f02244ea8ddf9757d8f39f/neutron/db/qos/models.py#L145-L166
[2] https://paste.opendev.org/show/bEPQCngI8QpmWIVGoiAi/
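A simplified illustration of the failure (a hypothetical helper, not the actual oslo.policy engine): an admin_or_owner-style check compares the requester's project ID with the target object's, and a QoS rule target simply has no project_id field to compare.

```python
def is_owner(context_project_id, target):
    # Owner check sketch: passes only when the target carries a
    # matching project_id.
    return target.get("project_id") == context_project_id

# The QoS policy object has a project_id...
qos_policy = {"id": "qos-1", "project_id": "proj-a"}
# ...but its bandwidth-limit rule does not (see the model in [1]),
# so an owner check against the rule can never pass.
bw_rule = {"id": "rule-1", "qos_policy_id": "qos-1", "max_kbps": 1000}
```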

** Affects: neutron
 Importance: Medium
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1996606

Title:
  QoS rules policies do not work for "owners"

Status in neutron:
  New

Bug description:
  Related bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2141470

  Policies for QoS rules do not work for "owner" checks since QoS rules
  do not have a project ID. When the default policy is overridden, the
  policy enforcement raises an exception. For example:

    "update_policy_bandwidth_limit_rule": "rule:admin_or_owner"

  When the policy engine tries to check the owner, it first checks the
  project_id of the object. In this case, the QoS rule does NOT have a
  project ID (e.g. the max-bw rule definition [1]).

  This is the exception the engine returns: [2].

  [1] https://github.com/openstack/neutron/blob/320f54eba1a82917e4f02244ea8ddf9757d8f39f/neutron/db/qos/models.py#L145-L166
  [2] https://paste.opendev.org/show/bEPQCngI8QpmWIVGoiAi/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1996606/+subscriptions




[Yahoo-eng-team] [Bug 1996421] Re: 'openstack port list' should display ports only from current project

2022-11-15 Thread Rodolfo Alonso
Hello:

Neutron currently doesn't provide RBAC functionality on ports. As
commented, a port is listed depending on (1) the network RBAC policies
and (2) the policy rules.

When a network is created by project A, a user of this project is able,
by default, to see all ports belonging to this network. If this user
shares the network with project B via RBAC, that other project will be
able to create ports on this network.

What we have here is the following:
* A project A user will be able to list all ports in the network because:
** the project owns the network;
** by default, the "get_port" policy includes
"rule:admin_owner_or_network_owner", which means all ports belonging to
this network are shown, regardless of their owner.

* A project B user will be able to list only the ports in this network
**created by this project**. A project B user won't be able to list
project A's ports (the owner of the network).
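The two cases above can be sketched as follows (a simplified model of the default "admin_owner_or_network_owner" behaviour, not neutron's actual policy engine):

```python
def visible_ports(requester_project, network_owner, ports):
    # The network owner lists every port on the network; a project the
    # network is shared with lists only the ports it created itself.
    if requester_project == network_owner:
        return list(ports)
    return [p for p in ports if p["project_id"] == requester_project]

ports = [
    {"id": "p1", "project_id": "project-a"},  # created by the network owner
    {"id": "p2", "project_id": "project-b"},  # created by the sharing project
]
# Project A (network owner) sees p1 and p2; project B sees only p2.
```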


The first case is what we have in this bug. This is the expected,
correct behaviour of Neutron.

Regarding the interaction with other projects, in this case Nova, this
is a known issue that also affects, for example, security group
retrieval. Nova always rejects a port that doesn't belong to the same
project ID executing the request. With the correct policies
('update_port:device_owner', 'update_port:binding:host_id' and
'update_port:binding:profile'), Nova should be able to bind a port. As
commented before, this Nova check is something that needs to be
discussed: Nova should be able, with the correct checks, to use ports
belonging to other projects; but that RFE is out of scope for this bug.

If you want to exclude the ports created by other projects from the
"port list" command, you can use the appropriate Neutron policies. For
example:
  "get_port": "rule:admin_or_owner"

I'll set the status of this bug to "Opinion" unless more information is
provided.

Regards.


** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1996421

Title:
  'openstack port list' should display ports only from current project

Status in neutron:
  Opinion

Bug description:
  When a network is shared between multiple projects, "openstack port
  list" command shows ports from all projects which have access to that
  network. This is a problem because each port actually has a
  “project_id“ property, and the port cannot be used for any instance
  outside of that project. When a user attempts to start an instance
  with a port from a different project, it fails like this:
  nova.exception.PortNotUsable

  Steps to reproduce in horizon :-
  ===

  1. create network and share network between 2 projects
  2. from Project A, manually create a port “Test Port“ on the network
     note that the port will have the project_id for Project A
  3. from Project B, open the Launch Instance workflow navigate to
     “Network Ports”
  4. At this point, you will see “Test Port” in the list. If you use it
     for the instance from Project B, the instance will fail

  Currently, a user can pass --project-id="" as an option to the
  "openstack port list" command to get the desired result. But this
  needs to be handled in every neutron client, e.g. nova, manila,
  openstackclient, or horizon.

  Instead, if we modify neutron itself to return only ports belonging to
  the current project in the 'openstack port list' command response
  (without specifying --project-id), at least for non-admin users, it
  would be a good improvement.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1996421/+subscriptions




[Yahoo-eng-team] [Bug 1994635] Re: [CI][tempest] Error in "test_multiple_create_with_reservation_return"

2022-11-15 Thread Rodolfo Alonso
This error has not been seen during the last weeks. I'm closing the bug
as Invalid.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1994635

Title:
  [CI][tempest] Error in "test_multiple_create_with_reservation_return"

Status in neutron:
  Invalid

Bug description:
  Logs:
  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_5bc/853779/7/gate/neutron-tempest-plugin-ovn/5bc81b5/testr_results.html

  Error: https://paste.opendev.org/show/bakPsVsXCYmg8v987o14/

  Investigation:
  The neutron server creates the port and sets it to UP. It also sends
  the "vif-plugged-event" [1], which is received by the Nova server.

  So far, I don't see where in this flow Neutron could be failing and
  why Nova declares the VIF is not created.

  I'll keep this bug open just in case we see another occurrence.

  [1]https://paste.opendev.org/show/bqJEdXei7lFP1bNfBW0H/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1994635/+subscriptions




[Yahoo-eng-team] [Bug 1996528] Re: No output for "openstack port list --project project_name" in case of non-admin user

2022-11-15 Thread Bernard Cafarelli
After checking on IRC [0], this is working as designed on the keystone
side: regular users aren't allowed to list projects.
As that is how the project ID is looked up, this is why non-admin users
get an empty list.

[0] https://meetings.opendev.org/irclogs/%23openstack-neutron/%23openstack-neutron.2022-11-15.log.html#t2022-11-15T14:53:04

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1996528

Title:
  No output for "openstack port list  --project project_name" in case of
  non-admin user

Status in neutron:
  Won't Fix

Bug description:
  Bug
  
  The "openstack port list --project project_id" command works for both
  admin and non-admin users.
  The "openstack port list --project project_name" command works only
  for admin users.

  
  Expected behavior
  ==
  openstack port list --project project_name command should work for both admin 
and non-admin users.

  Steps to reproduce
  ===
  1. source openrc admin admin
  2. openstack port list --project [this works]
  3. source openrc demo demo
  4. openstack port list --project   [this works]
  5. openstack port list --project   [No output]

  Running with the --debug flag, it seems non-admin (i.e. demo) users
  don't have authorization to list projects, so the name resolution from
  project_name to project_id fails. The query is forwarded to neutron
  with the project_name instead of the project_id. Neutron then filters
  the DB using {project_id: project_name} and the query returns an empty
  result.
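The behaviour described above can be sketched like this (hypothetical helper names; the real name-to-ID resolution happens in the client, via keystone):

```python
def build_port_filter(project_arg, lookup_project_id):
    """Resolve the --project argument to an ID for the port query.

    lookup_project_id stands in for the keystone project lookup: it
    returns an ID for admins but raises for regular users, in which
    case the raw name is passed through as if it were an ID and the
    port query matches nothing.
    """
    try:
        project_id = lookup_project_id(project_arg)
    except PermissionError:
        project_id = project_arg  # name silently used as if it were an ID
    return {"project_id": project_id}

def admin_lookup(name):
    # Admins may list projects, so the name resolves to an ID.
    return {"demo": "a1b2c3d4"}[name]

def non_admin_lookup(name):
    # Regular users get a 403 when listing projects.
    raise PermissionError("listing projects requires admin")
```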

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1996528/+subscriptions




[Yahoo-eng-team] [Bug 1996622] [NEW] Cannot mount old encrypted volume to an instance with Invalid password, cannot unlock any keyslot

2022-11-15 Thread Jan Wasilewski
Public bug reported:

Description
===
After an upgrade of barbican from Ussuri to Yoga, it is not possible to
attach encrypted volumes created before the upgrade to any instance,
because of the error: "libvirt.libvirtError: internal error: unable to
execute QEMU command 'blockdev-add': Invalid password, cannot unlock any
keyslot". Encrypted volumes created after the upgrade can be attached to
instances without this error.

Steps to reproduce
==
1. Have an encrypted volume that was created before the upgrade
2. Execute the command:
openstack server add volume my-new-instance my-old-encrypted-volume
3. Check the attachment details with:
openstack server show my-new-instance

Expected result
===
my-old-encrypted-volume is visible in the volumes_attached list, and the
newly attached drive is visible inside the VM OS.

Actual result
=
my-old-encrypted-volume is not visible in the volumes_attached list.
During the attachment I can see these errors in the nova-compute logs:
https://paste.openstack.org/show/bNbPOHiQJOq8OsKZ5Gn2/
The barbican and cinder logs show nothing wrong. Moreover, I can
correctly retrieve the payload of the key from barbican, and of the
secret used to keep the passphrase for my-old-encrypted-volume, with the
command:
barbican secret get --payload_content_type application/octet-stream secret-id-and-href --file my_symmetric_key.key

The same procedure executed for a freshly created volume works fine -
the new encrypted disk is visible inside the instance OS.

Environment
===
1. Exact version of OpenStack you are running. See the following
# dpkg -l | grep nova
ii  nova-api            2:21.2.4-0ubuntu1  all  OpenStack Compute - API frontend
ii  nova-common         2:21.2.4-0ubuntu1  all  OpenStack Compute - common files
ii  nova-conductor      2:21.2.4-0ubuntu1  all  OpenStack Compute - conductor service
ii  nova-novncproxy     2:21.2.4-0ubuntu1  all  OpenStack Compute - NoVNC proxy
ii  nova-scheduler      2:21.2.4-0ubuntu1  all  OpenStack Compute - virtual machine scheduler
ii  python3-nova        2:21.2.4-0ubuntu1  all  OpenStack Compute Python 3 libraries
ii  python3-novaclient  2:17.0.0-0ubuntu1  all  client library for OpenStack Compute API - 3.x

# dpkg -l | grep barbican
ii  barbican-api                2:14.0.0-0ubuntu1~cloud0  all  OpenStack Key Management Service - API Server
ii  barbican-common             2:14.0.0-0ubuntu1~cloud0  all  OpenStack Key Management Service - common files
ii  barbican-keystone-listener  2:14.0.0-0ubuntu1~cloud0  all  OpenStack Key Management Service - Keystone Listener
ii  barbican-worker             2:14.0.0-0ubuntu1~cloud0  all  OpenStack Key Management Service - Worker Node
ii  python3-barbican            2:14.0.0-0ubuntu1~cloud0  all  OpenStack Key Management Service - Python 3 files
ii  python3-barbicanclient      5.2.0-0ubuntu1~cloud0     all  OpenStack Key Management API client - Python 3.x

2. Which hypervisor did you use?
Libvirt:
# dpkg -l | grep libvirt
ii  libvirt-daemon                     6.0.0-0ubuntu8.16  amd64  Virtualization daemon
ii  libvirt-daemon-driver-qemu         6.0.0-0ubuntu8.16  amd64  Virtualization daemon QEMU connection driver
ii  libvirt-daemon-driver-storage-rbd  6.0.0-0ubuntu8.16  amd64  Virtualization daemon RBD storage driver
ii  libvirt0:amd64                     6.0.0-0ubuntu8.16  amd64  library for interfacing with different virtualization systems
ii  python3-libvirt                    6.1.0-1            amd64  libvirt Python 3 bindings

3. Which storage type did you use?
iSCSI Huawei dorado

4. Which networking type did you use?
Neutron linuxbridge

Logs & Configs
==
An error message from nova-compute log: 
https://paste.openstack.org/show/bNbPOHiQJOq8OsKZ5Gn2/

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: cinder volumes


[Yahoo-eng-team] [Bug 1995078] Re: OVN: HA chassis group priority is different than gateway chassis priority

2022-11-15 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/kolla-ansible/+/864510
Committed: 
https://opendev.org/openstack/kolla-ansible/commit/8bf8656dbad3def707eca2d8ddd2c9bfed389b86
Submitter: "Zuul (22348)"
Branch:master

commit 8bf8656dbad3def707eca2d8ddd2c9bfed389b86
Author: Bartosz Bezak 
Date:   Tue Nov 15 11:08:15 2022 +0100

Generate ovn-chassis-mac-mappings on ovn-controller group

Previously ovn-chassis-mac-mappings [1] has been added only to
ovn-controller-compute group. However external ports are being
scheduled on network nodes, therefore we need also do that there.

Closes-Bug: 1995078

[1] 
https://github.com/ovn-org/ovn/blob/v22.09.0/controller/ovn-controller.8.xml#L239

Change-Id: Ie62e9220bad56262cad602ca1480e6ca65827819


** Changed in: kolla-ansible
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1995078

Title:
  OVN: HA chassis group priority is different than gateway chassis
  priority

Status in kolla-ansible:
  Fix Released
Status in kolla-ansible wallaby series:
  New
Status in kolla-ansible xena series:
  New
Status in kolla-ansible yoga series:
  New
Status in kolla-ansible zed series:
  Fix Released
Status in neutron:
  Confirmed

Bug description:
  OpenStack release affected - Wallaby, Xena and Yoga for sure
  OVN version: 21.12 (from CentOS NFV SIG repos)
  Host OS: CentOS Stream 8

  Neutron creates External ports for bare metal instances and uses
  ha_chassis_group.
  Neutron normally defines different priorities for a router's LRP
  gateway chassis and for the ha_chassis_group.

  I have a router with two VLANs attached - external (used for internet
  connectivity - SNAT or DNAT/Floating IP) and internal VLAN network
  hosting bare metal servers (and some Geneve networks for VMs).

  If an External port’s HA chassis group active chassis is different
  than gateway chassis (external vlan network) active chassis - those
  bare metal servers have intermittent network connectivity for any
  traffic going through that router.

  In a thread on the ovs-discuss ML, Numan Siddique wrote that "it is
  recommended that the same controller which is actively handling the
  gateway traffic also handles the external ports".

  More information in this thread -
  https://mail.openvswitch.org/pipermail/ovs-
  discuss/2022-October/052067.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1995078/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1996527] Re: Unit test failure with Python 3.11

2022-11-15 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/864448
Committed: 
https://opendev.org/openstack/neutron/commit/c5ee9f349548b5bc842054aa26118e205c490aa2
Submitter: "Zuul (22348)"
Branch: master

commit c5ee9f349548b5bc842054aa26118e205c490aa2
Author: Brian Haley 
Date:   Mon Nov 14 18:06:20 2022 -0500

Load the required configuration options in the UT classes

Some test classes are not loading the required configuration options
during the setup process. That prevents launching those tests or
classes individually. This patch solves the issue by importing the
required options in the "setUp" test class method.

This is breaking python 3.11 on Debian, not possible to test
in the gate at the moment.

Closes-Bug: #1996527

Change-Id: Ie579df7126ca8d09dbedad8d2254c79ec0d3bc32


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1996527

Title:
  Unit test failure with Python 3.11

Status in neutron:
  Fix Released

Bug description:
  Hi,

  In Debian, we're trying to switch to 3.11 before the Bookworm freeze
  in January.

  Rebuilding Neutron under Python 3.11 fails:

  FAIL: 
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestOVNMechanismDriverPortsV2.test__port_provisioned_no_binding
  
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestOVNMechanismDriverPortsV2.test__port_provisioned_no_binding
  --
  testtools.testresult.real._StringException: Traceback (most recent call last):
File 
"/<>/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py",
 line 2817, in setUp
  ovn_conf.cfg.CONF.set_override('ovn_metadata_enabled', False,
File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2077, in 
__inner
  result = f(self, *args, **kwargs)
   
File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2460, in 
set_override
  opt_info = self._get_opt_info(name, group)
 ^^^
File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2869, in 
_get_opt_info
  group = self._get_group(group)
  ^^
File "/usr/lib/python3/dist-packages/oslo_config/cfg.py", line 2838, in 
_get_group
  raise NoSuchGroupError(group_name)
  oslo_config.cfg.NoSuchGroupError: no such group [ovn]

  and 145 more like this one...

  This looks weird to me though, and unrelated to OVN.

  Cheers,

  Thomas

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1996527/+subscriptions




[Yahoo-eng-team] [Bug 1995078] Re: OVN: HA chassis group priority is different than gateway chassis priority

2022-11-15 Thread Michal Nasiadka
Hi Rodolfo,

At the moment it seems that this has fixed the issue. Basically, we added
that in the past without thinking about external ports.

I marked it as Invalid in Neutron; if we see any issues, we'll reopen it in
the future.
Thanks for your help!

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1995078

Title:
  OVN: HA chassis group priority is different than gateway chassis
  priority

Status in kolla-ansible:
  Fix Released
Status in kolla-ansible wallaby series:
  In Progress
Status in kolla-ansible xena series:
  In Progress
Status in kolla-ansible yoga series:
  In Progress
Status in kolla-ansible zed series:
  Fix Released
Status in neutron:
  Invalid

Bug description:
  OpenStack release affected - Wallaby, Xena and Yoga for sure
  OVN version: 21.12 (from CentOS NFV SIG repos)
  Host OS: CentOS Stream 8

  Neutron creates External ports for bare metal instances and uses
  ha_chassis_group. Neutron normally assigns different priorities to a
  router's LRP gateway chassis and to the ha_chassis_group.

  I have a router with two VLANs attached - external (used for internet
  connectivity - SNAT or DNAT/Floating IP) and internal VLAN network
  hosting bare metal servers (and some Geneve networks for VMs).

  If an External port’s HA chassis group active chassis is different
  than gateway chassis (external vlan network) active chassis - those
  bare metal servers have intermittent network connectivity for any
  traffic going through that router.

  In a thread on the ovs-discuss ML, Numan Siddique wrote that "it is
  recommended that the same controller which is actively handling the
  gateway traffic also handles the external ports".

  More information in this thread -
  https://mail.openvswitch.org/pipermail/ovs-
  discuss/2022-October/052067.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1995078/+subscriptions




[Yahoo-eng-team] [Bug 1996638] [NEW] horizon-nodejs16/18-run-test job failing in Ubuntu Jammy (22.04)

2022-11-15 Thread Vishal Manchanda
Public bug reported:

As per the 2023.1 cycle's community-wide goal of migrating CI/CD to
Ubuntu Jammy (22.04), we are testing all jobs in advance, before the
actual migration happens at m-1 (Nov 18).

horizon-nodejs16-run-test job is failing on Jammy
- 
https://zuul.opendev.org/t/openstack/build/bbdfff9573894d469d7394a4da4c4274/log/job-output.txt#2564-2584

I guess we are hitting bug 
https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1951491/
I have already discussed this topic with openstack-infra team, please refer 
https://meetings.opendev.org/irclogs/%23openstack-infra/%23openstack-infra.2022-11-09.log.html#t2022-11-09T17:27:06

Right now, if we migrate the nodeset of the "horizon-nodejs16-run-test" job
from focal to jammy, it will break, and all the other horizon plugins that
run this job will break as well [1].

[1] https://codesearch.openstack.org/?q=horizon-nodejs-
jobs&i=nope&literal=nope&files=&excludeFiles=&repos=

** Affects: horizon
 Importance: High
 Status: New

** Changed in: horizon
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1996638

Title:
  horizon-nodejs16/18-run-test job failing in Ubuntu Jammy (22.04)

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  As per the 2023.1 cycle's community-wide goal of migrating CI/CD to
  Ubuntu Jammy (22.04), we are testing all jobs in advance, before the
  actual migration happens at m-1 (Nov 18).

  horizon-nodejs16-run-test job is failing on Jammy
  - 
https://zuul.opendev.org/t/openstack/build/bbdfff9573894d469d7394a4da4c4274/log/job-output.txt#2564-2584

  I guess we are hitting bug 
https://bugs.launchpad.net/ubuntu/+source/snapd/+bug/1951491/
  I have already discussed this topic with openstack-infra team, please refer 
https://meetings.opendev.org/irclogs/%23openstack-infra/%23openstack-infra.2022-11-09.log.html#t2022-11-09T17:27:06

  Right now, if we migrate the nodeset of the "horizon-nodejs16-run-test"
  job from focal to jammy, it will break, and all the other horizon
  plugins that run this job will break as well [1].

  [1] https://codesearch.openstack.org/?q=horizon-nodejs-
  jobs&i=nope&literal=nope&files=&excludeFiles=&repos=

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1996638/+subscriptions




[Yahoo-eng-team] [Bug 1991024] Re: distros/manage_service: add support to disable services

2022-11-15 Thread James Falcon
This bug is believed to be fixed in cloud-init in version 22.4. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1991024

Title:
  distros/manage_service: add support to disable services

Status in cloud-init:
  Fix Released

Bug description:
  In trying to add BSD support for cc_ntp
  (https://bugs.launchpad.net/cloud-init/+bug/1990041) and cc_rsyslogd
  (https://bugs.launchpad.net/cloud-init/+bug/1798055) we'll also need
  to be able to *disable* services

  I would like to see the "disable" stanza added to distro's
  manage_service method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1991024/+subscriptions




[Yahoo-eng-team] [Bug 1990070] Re: implement manage_service for BSDs

2022-11-15 Thread James Falcon
This bug is believed to be fixed in cloud-init in version 22.4. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1990070

Title:
  implement manage_service for BSDs

Status in cloud-init:
  Fix Released

Bug description:
  Distro.manage_service is currently only implemented for systemd-based
  Linux distros. SysV-based BSDs and OpenRC-based Alpine are left out.

  On BSDs, services are enabled in /etc/rc.conf (on FreeBSD this can be
  done with sysrc(8): https://man.freebsd.org/sysrc(8)) and
  started/stopped/restarted, etc. via service(8):
  https://man.freebsd.org/service(8) or rc.d(8):
  https://man.netbsd.org/rc.d.8
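A minimal sketch of how a BSD manage_service implementation could map actions onto the tooling described above (the function name and shape are illustrative, not cloud-init's actual code, which would run these through its subp helper):

```python
# Illustrative only: maps manage_service-style actions onto the BSD
# tooling named in the report -- sysrc(8) for enable/disable (rc.conf)
# and service(8) for runtime control. We only build the argv lists here
# rather than executing anything.

def bsd_service_cmd(action: str, service: str) -> list:
    if action == "enable":
        return ["sysrc", f"{service}_enable=YES"]
    if action == "disable":
        return ["sysrc", f"{service}_enable=NO"]
    if action in ("start", "stop", "restart", "status"):
        return ["service", service, action]
    raise ValueError(f"unsupported action: {action}")

print(bsd_service_cmd("enable", "ntpd"))   # -> ['sysrc', 'ntpd_enable=YES']
print(bsd_service_cmd("restart", "ntpd"))  # -> ['service', 'ntpd', 'restart']
```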

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1990070/+subscriptions




[Yahoo-eng-team] [Bug 1990041] Re: cc_ntp: please add support for *BSD

2022-11-15 Thread James Falcon
This bug is believed to be fixed in cloud-init in version 22.4. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1990041

Title:
  cc_ntp: please add support for *BSD

Status in cloud-init:
  Fix Released

Bug description:
  *BSDs have ntpd installed in the base system.
  It is enabled via `ntpd_enable=YES` in `/etc/rc.conf`,
  started with `service ntpd start`,
  and configured in `/etc/ntp.conf`.

  we should extend cc_ntp to support BSDs

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1990041/+subscriptions




[Yahoo-eng-team] [Bug 1962343] Re: VMs hardening with the noexec option in /tmp and /var/tmp which is causing issues to get an IP with cloud-init , reason why the VM takes like 25 min to start

2022-11-15 Thread James Falcon
This bug is believed to be fixed in cloud-init in version 22.4. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1962343

Title:
  VMs hardening with the noexec option in /tmp and /var/tmp which is
  causing issues to get an IP with cloud-init , reason why the VM takes
  like 25 min to start

Status in cloud-init:
  Fix Released

Bug description:

  Hardening an Azure VM (Ubuntu 18.04) with the noexec option on /tmp and
  /var/tmp prevents dhclient from obtaining an IP during cloud-init,
  which is why the VM takes about 25 minutes to start.


   
  Hardening:
   
  root@ubu1804repro:~# cat /etc/fstab
  # CLOUD_IMG: This file was created/modified by the Cloud Image build process
  UUID=5b1ab5d4-8b76-46c5-928f-8db42fbe3af6 / ext4 defaults,discard 0 1
  UUID=91B6-4BB7 /boot/efi vfat umask=0077 0 1
  UUID="fadc7d49-1a88-4eed-8964-94b78ee7dfa6" /tmp ext4 rw,nodev,nosuid,noexec,discard 0 0
  /tmp /var/tmp none rw,noexec,nosuid,nodev,bind 0 0
  /dev/disk/cloud/azure_resource-part1 /mnt auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
   
   
  Error:
   
  [  OK  ] Reached target System Time Synchronized.
  [  OK  ] Started AppArmor initialization.
   Starting Load AppArmor profiles managed internally by snapd...
   Starting Initial cloud-init job (pre-networking)...
  [8.062136] sh[795]: + [ -e /var/lib/cloud/instance/obj.pkl ]
  [  OK  ] [8.097225] sh[795]: + echo cleaning persistent cloud-init object
  Started Load AppArmor profiles managed internally by snapd.
  [8.100207] sh[795]: cleaning persistent cloud-init object
  [8.106214] sh[795]: + rm /var/lib/cloud/instance/obj.pkl
  [8.112706] sh[795]: + exit 0
  [   14.435302] cloud-init[813]: Cloud-init v. 21.4-0ubuntu1~18.04.1 running 
'init-local' at Fri, 25 Feb 2022 17:18:50 +. Up 8.71 seconds.
  [   14.445225] cloud-init[813]: 2022-02-25 17:18:56,105 - dhcp.py[WARNING]: 
dhclient did not produce expected files: dhcp.leases, dhclient.pid
  [   14.453129] cloud-init[813]: 2022-02-25 17:18:56,107 - azure.py[WARNING]: 
exception while getting metadata:
  [   14.460876] cloud-init[813]: 2022-02-25 17:18:56,109 - azure.py[ERROR]: 
Could not crawl Azure metadata:
  [   19.626878] cloud-init[813]: 2022-02-25 17:19:01,297 - dhcp.py[WARNING]: 
dhclient did not produce expected files: dhcp.leases, dhclient.pid
  [   19.664700] cloud-init[813]: 2022-02-25 17:19:01,333 - azure.py[ERROR]: 
Failed to read /var/lib/dhcp/dhclient.eth0.leases: [Errno 2] No such file or 
directory: '/var/lib/dhcp/dhclient.eth0.leases'
  [   19.674221] cloud-init[813]: 2022-02-25 17:19:01,333 - azure.py[WARNING]: 
No lease found; using default endpoint: a8:3f:81:10
   
   
  Cloud-Init Version :
   
  root@ubu1804repro:~# cloud-init --version
  /usr/bin/cloud-init 21.4-0ubuntu1~18.04.1
  root@ubu1804repro:~# 
   
  OS version: 
   
  root@ubu1804repro:~# uname -a
  Linux ubu1804repro 5.4.0-1069-azure #72~18.04.1-Ubuntu SMP Mon Feb 7 11:12:24 
UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
  root@ubu1804repro:~# 
   
   
  root@ubu1804repro:~# cat /etc/*rele*
  DISTRIB_ID=Ubuntu
  DISTRIB_RELEASE=18.04
  DISTRIB_CODENAME=bionic
  DISTRIB_DESCRIPTION="Ubuntu 18.04.6 LTS"
  NAME="Ubuntu"
  VERSION="18.04.6 LTS (Bionic Beaver)"
  ID=ubuntu
  ID_LIKE=debian
  PRETTY_NAME="Ubuntu 18.04.6 LTS"
  VERSION_ID="18.04"
  HOME_URL="https://www.ubuntu.com/";
  SUPPORT_URL="https://help.ubuntu.com/";
  BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/";
  
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy";
  VERSION_CODENAME=bionic
  UBUNTU_CODENAME=bionic
  root@ubu1804repro:~# 
   
  Workaround: remove the noexec option from the /tmp and /var/tmp entries
  in /etc/fstab.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1962343/+subscriptions




[Yahoo-eng-team] [Bug 1913461] Re: Need jinja templating in config files

2022-11-15 Thread James Falcon
This bug is believed to be fixed in cloud-init in version 22.4. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1913461

Title:
  Need jinja templating in config files

Status in cloud-init:
  Fix Released

Bug description:
  It may be interesting to parse config files in
  /etc/cloud/cloud.cfg.d/*.cfg for Jinja templating.

  As an example :

  ## template: jinja
  #cloud-config

  runcmd:
- [hostname, "{{ ds.meta_data.tags.Name }}"]
# - [hostname, "$(cloud-init query ds.meta_data.tags.Name)"]

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1913461/+subscriptions




[Yahoo-eng-team] [Bug 1883122] Re: `cloud-init status` should distinguish between "permanently disabled" and "disabled for this boot"

2022-11-15 Thread James Falcon
This bug is believed to be fixed in cloud-init in version 22.4. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1883122

Title:
  `cloud-init status` should distinguish between "permanently disabled"
  and "disabled for this boot"

Status in cloud-init:
  Fix Released

Bug description:
  Using ds-identify and a systemd generator, cloud-init can detect that
  it should disable itself for a particular boot when there is nothing
  for it to do.  However, if on the next boot a datasource becomes
  applicable (e.g. a NoCloud/ConfigDrive device is presented to the
  system) then cloud-init will _not_ be disabled, because ds-identify
  will detect an applicable datasource.

  If users want a stronger guarantee that cloud-init will not run, then
  they can touch /etc/cloud/cloud-init.disabled, or add cloud-
  init=disabled to their grub configured kernel command line.  When they
  do so, cloud-init will _never_ run, regardless of the applicability of
  datasources.

  In both of these cases, `cloud-init status` reports "disabled".  This
  means that users who want to confirm that cloud-init will never run in
  the future given its current configuration have to check all the
  potential ways that cloud-init might be permanently disabled
  (/etc/..., kernel cmdline, maybe other options that I haven't
  documented here, maybe new options in the future) themselves.

  We should distinguish between these two modalities of "disabled" for
  users in our status output.
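A minimal sketch of the check such a status distinction could perform (the function name and return strings are illustrative; the marker path and kernel argument are the ones named in the report):

```python
# Sketch: distinguish "permanently disabled" (marker file or kernel
# cmdline) from the ds-identify "disabled for this boot" case, which is
# what the report asks `cloud-init status` to surface.
from pathlib import Path

def disabled_reason(cmdline: str, marker="/etc/cloud/cloud-init.disabled"):
    if Path(marker).exists():
        return "disabled (permanent: marker file)"
    if "cloud-init=disabled" in cmdline.split():
        return "disabled (permanent: kernel cmdline)"
    # None here does not mean enabled: ds-identify may still have
    # disabled cloud-init for this particular boot.
    return None

print(disabled_reason("ro quiet cloud-init=disabled",
                      marker="/nonexistent-marker"))
# -> disabled (permanent: kernel cmdline)
```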

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1883122/+subscriptions




[Yahoo-eng-team] [Bug 1307667] Re: The include.txt example is not valid

2022-11-15 Thread James Falcon
This bug is believed to be fixed in cloud-init in version 22.4. If this
is still a problem for you, please make a comment and set the state back
to New

Thank you.

** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1307667

Title:
  The include.txt example is not valid

Status in cloud-init:
  Fix Released

Bug description:
  The example lists the following urls:
  http://www.ubuntu.com/robots.txt
  http://www.w3schools.com/html/lastpage.htm

  Both of which do not contain valid user-data info and when attempting
  on the Rackspace Debian 7 (Wheezy) (PVHVM) Image it appears to cause
  problems.  I don't think this is a high priority and would rate the
  severity really low.  More so confusion around the functionality.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1307667/+subscriptions




[Yahoo-eng-team] [Bug 1996677] [NEW] [OVN] support update fixed_ips of metadata port

2022-11-15 Thread Liu Xie
Public bug reported:

In some scenarios, customers want to modify the fixed_ips of the metadata port.
We can work around it with the following steps:

1. First, update the fixed_ips of the metadata port:
neutron port-update --fixed-ip subnet_id=e130a5c7-6f47-4c76-b245-cf05369f2161,ip_address=192.168.111.16 460dffa9-e25a-437d-8252-ae9c5185aaab

2. Then, trigger only a subnet update:
neutron subnet-update --enable-dhcp e130a5c7-6f47-4c76-b245-cf05369f2161

3. Finally, restart neutron-ovn-metadata-agent.


It would be a good feature to support updating the fixed_ips of the metadata
port. Maybe we can implement it in neutron-ovn-metadata-agent by watching the
UPDATE event of Port_Binding rows, and calling update_datapath when the row is
related to the metadata port.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1996677

Title:
  [OVN] support update fixed_ips of metadata port

Status in neutron:
  New

Bug description:
  In some scenarios, customers want to modify the fixed_ips of the metadata
  port. We can work around it with the following steps:

  1. First, update the fixed_ips of the metadata port:
  neutron port-update --fixed-ip subnet_id=e130a5c7-6f47-4c76-b245-cf05369f2161,ip_address=192.168.111.16 460dffa9-e25a-437d-8252-ae9c5185aaab

  2. Then, trigger only a subnet update:
  neutron subnet-update --enable-dhcp e130a5c7-6f47-4c76-b245-cf05369f2161

  3. Finally, restart neutron-ovn-metadata-agent.


  It would be a good feature to support updating the fixed_ips of the
  metadata port. Maybe we can implement it in neutron-ovn-metadata-agent by
  watching the UPDATE event of Port_Binding rows, and calling
  update_datapath when the row is related to the metadata port.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1996677/+subscriptions

