[Yahoo-eng-team] [Bug 2058908] [NEW] fix auto_scheduler_network understanding dhcp_agents_per_network

2024-03-25 Thread Sahid Orentino
Public bug reported:

When using a routed provider network, a condition bypasses the
dhcp_agents_per_network option. As a result, in an environment with 3 agents
and dhcp_agents_per_network=2, a network that is already correctly handled by
2 agents will also be picked up by the third agent when that agent is
restarted, ending up with 3 agents handling the network.

The issue is in the auto_scheduler_network function.
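
A minimal sketch of the missing guard (helper and parameter names are
assumptions modeled on neutron's DHCP scheduler API, not the actual fix):
before binding the network to one more agent, the scheduling path should count
the agents already hosting the network's DHCP and respect
dhcp_agents_per_network.

    # Hypothetical sketch only; not the reviewed patch.
    from oslo_config import cfg

    def auto_schedule_network(plugin, context, network_id, agent):
        hosting = plugin.get_dhcp_agents_hosting_networks(
            context, [network_id], active=True)
        if len(hosting) >= cfg.CONF.dhcp_agents_per_network:
            # Enough agents already handle this network; a restarted
            # third agent must not pick it up.
            return False
        plugin.add_network_to_dhcp_agent(context, network_id, agent['id'])
        return True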

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2058908

Title:
  fix auto_scheduler_network understanding dhcp_agents_per_network

Status in neutron:
  New

Bug description:
  When using a routed provider network, a condition bypasses the
  dhcp_agents_per_network option. As a result, in an environment with 3 agents
  and dhcp_agents_per_network=2, a network that is already correctly handled
  by 2 agents will also be picked up by the third agent when that agent is
  restarted, ending up with 3 agents handling the network.

  The issue is in the auto_scheduler_network function.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2058908/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2051729] [NEW] issue in the DHCP stale device cleanup process during the enable action

2024-01-30 Thread Sahid Orentino
Public bug reported:

When the driver enable call is invoked, the cleanup_stale_devices function is
run to remove stale devices within the namespace. cleanup_stale_devices
examines the ports of the network to prevent the unintentional removal of
legitimate devices.

In a multisegment context, the device created first might be deleted during
the second iteration. This occurs because the network variable used in the
loop is not a single reference to the same object, so its ports are not
updated with the ones created during previous iterations.
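
A small self-contained illustration of the failure mode described above (this
is not the neutron code; names and structures are invented for the example):
when each segment iteration works on its own copy of the network, the port
created for segment 0 is unknown to the copy used for segment 1, so its device
is treated as stale.

    import copy

    base_network = {'id': 'net-1', 'ports': []}
    plugged_devices = []

    for segment in ('segment-0', 'segment-1'):
        network = copy.deepcopy(base_network)        # bug: a fresh copy each time
        known = {p['device'] for p in network['ports']}
        stale = [d for d in plugged_devices if d not in known]
        print(segment, 'would remove:', stale)       # segment-1 removes segment-0's device
        device = 'tap-' + segment
        plugged_devices.append(device)
        network['ports'].append({'device': device})  # update lost with the copy

Keeping a single shared network object whose ports are refreshed across
iterations avoids the spurious removal.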

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2051729

Title:
  issue in the DHCP stale device cleanup process during the enable action

Status in neutron:
  New

Bug description:
  When the driver enable call is invoked, the cleanup_stale_devices function
  is run to remove stale devices within the namespace. cleanup_stale_devices
  examines the ports of the network to prevent the unintentional removal of
  legitimate devices.

  In a multisegment context, the device created first might be deleted
  during the second iteration. This occurs because the network variable
  used in the loop is not a single reference to the same object, so its
  ports are not updated with the ones created during previous iterations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2051729/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2051690] [NEW] when removing a network from an agent, dnsmasq constantly tries to restart

2024-01-30 Thread Sahid Orentino
Public bug reported:

When removing a network from an agent, dnsmasq constantly gets revived.

This has been observed when using multisegment networks: the external process
monitor is not properly unregistered for that service.

This is because the correct helper for deriving the process identifier is not
used in the unregister call.
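
A minimal sketch of the suspected mismatch, assuming a monitor API shaped like
neutron's ProcessMonitor (register/unregister keyed by uuid and service name);
_dnsmasq_service_name is a hypothetical helper standing in for whatever derives
the per-segment service name at registration time:

    # Illustration only; the point is that the same helper must be used on
    # both sides, otherwise the monitor keeps respawning dnsmasq.
    def _dnsmasq_service_name(segment):
        return 'dnsmasq' if segment is None else 'dnsmasq-%s' % segment.id

    def register_monitor(pm, network, segment, callback):
        pm.register(uuid=network.id,
                    service_name=_dnsmasq_service_name(segment),
                    monitored_process=callback)

    def unregister_monitor(pm, network, segment):
        # Unregistering with a plain 'dnsmasq' name here would leave the
        # per-segment entry registered and dnsmasq would keep being revived.
        pm.unregister(uuid=network.id,
                      service_name=_dnsmasq_service_name(segment))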

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2051690

Title:
  when removing a network from an agent, dnsmasq constantly tries to restart

Status in neutron:
  In Progress

Bug description:
  When removing a network from an agent, dnsmasq constantly gets revived.

  This has been observed when using multisegment networks: the external
  process monitor is not properly unregistered for that service.

  This is because the correct helper for deriving the process identifier is
  not used in the unregister call.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2051690/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2049615] [NEW] multisegments: cleaning DHCP process for segment 0 should happen first

2024-01-17 Thread Sahid Orentino
Public bug reported:

With the new multi-segment support, code has been added to clean up the old
DHCP setup of a network. That cleanup should happen first and should target
segment index == 0.

Since the list of segments for a given network is not ordered by segment
index, the setup for segment index 1 can be processed before index 0, in
which case it gets destroyed by the cleanup, leaving the setup missing.
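
A minimal sketch of the ordering idea (assuming each segment mapping exposes a
'segment_index' key; configure_dhcp_for_segment is a hypothetical helper, not
the neutron code): handle segment index 0 first so that its cleanup of the old
single-segment DHCP setup cannot destroy setups created for higher-index
segments in the same pass.

    def process_segments(network, segments):
        for segment in sorted(segments, key=lambda s: s.get('segment_index', 0)):
            configure_dhcp_for_segment(network, segment)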

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2049615

Title:
  multisegments: cleaning DHCP process for segment 0 should happen first

Status in neutron:
  New

Bug description:
  With the new multi-segment support, code has been added to clean up the old
  DHCP setup of a network. That cleanup should happen first and should target
  segment index == 0.

  Since the list of segments for a given network is not ordered by segment
  index, the setup for segment index 1 can be processed before index 0, in
  which case it gets destroyed by the cleanup, leaving the setup missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2049615/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2018398] [NEW] Wrong AZ gets shown when adding a new compute node

2023-05-03 Thread Sahid Orentino
Public bug reported:

On a deployment with multiple availability zones, when the operator adds a
new compute host, the service gets registered under
“default_availability_zone”.

This is undesirable behavior for users: they see a new AZ appear that may not
be related to the deployment, during the time window before the host finally
gets configured with its correct AZ.
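
For reference, the option involved lives in nova.conf; the snippet below is
only an illustration of where a freshly added host lands until the operator
assigns its real AZ (the value shown is the upstream default):

    [DEFAULT]
    # New compute services show up under this AZ until they are added to a
    # host aggregate carrying their intended availability zone.
    default_availability_zone = nova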

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2018398

Title:
  Wrong AZ gets shown when adding a new compute node

Status in OpenStack Compute (nova):
  New

Bug description:
  On a deployment with multi availability zones. When the operator adds
  a new compute host, the service gets registered as part of
  “default_availability_zone”.

  This is an undesirable behavior for users as they see a new AZ
  appearing which may not be related to the deployment the time window
  that the host finally gets configured to its correct AZ.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2018398/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619002] Re: Networking API v2.0 in Networking API Reference missing information

2023-04-07 Thread Sahid Orentino
Load balancer support has been deprecated and removed, so I guess we can close
this one.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1619002

Title:
  Networking API v2.0 in Networking API Reference missing information

Status in neutron:
  Won't Fix

Bug description:
  In extensions, the loadbalancer object also has vip_port_id 
(http://developer.openstack.org/api-ref/networking/v2/index.html?expanded=show-load-balancer-details-detail#id3)
 - this does not appear on the documentation.
  ---
  Release: 0.4.1.dev4 on 'Sat Aug 27 19:31:24 2016, commit adef52e'
  SHA: 
  Source: Can't derive source file URL
  URL: http://developer.openstack.org/api-ref/networking/v2/index.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1619002/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2013045] Re: CI: MacvtapAgentTestCase

2023-03-29 Thread Sahid Orentino
*** This bug is a duplicate of bug 2012510 ***
https://bugs.launchpad.net/bugs/2012510

dup 2012510

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2013045

Title:
  CI: MacvtapAgentTestCase

Status in neutron:
  Invalid

Bug description:
  ft1.1: neutron.tests.functional.plugins.ml2.drivers.macvtap.agent.test_macvtap_neutron_agent.MacvtapAgentTestCase.test_get_all_devices
  testtools.testresult.real._StringException: Traceback (most recent call last):
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 182, in func
      return f(self, *args, **kwargs)
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/macvtap/agent/test_macvtap_neutron_agent.py", line 47, in test_get_all_devices
      self.assertEqual(set([macvtap.link.address]),
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py", line 394, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py", line 481, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: {'3a:83:7e:60:34:b6'} != {'66:81:56:14:7d:0d'}

  https://zuul.opendev.org/t/openstack/build/235c115c538f4f84b839f15b628339b6/logs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2013045/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2013045] [NEW] CI: MacvtapAgentTestCase

2023-03-28 Thread Sahid Orentino
Public bug reported:

ft1.1: neutron.tests.functional.plugins.ml2.drivers.macvtap.agent.test_macvtap_neutron_agent.MacvtapAgentTestCase.test_get_all_devices
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 182, in func
    return f(self, *args, **kwargs)
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/macvtap/agent/test_macvtap_neutron_agent.py", line 47, in test_get_all_devices
    self.assertEqual(set([macvtap.link.address]),
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py", line 394, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py", line 481, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: {'3a:83:7e:60:34:b6'} != {'66:81:56:14:7d:0d'}

https://zuul.opendev.org/t/openstack/build/235c115c538f4f84b839f15b628339b6/logs

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2013045

Title:
  CI: MacvtapAgentTestCase

Status in neutron:
  New

Bug description:
  ft1.1: neutron.tests.functional.plugins.ml2.drivers.macvtap.agent.test_macvtap_neutron_agent.MacvtapAgentTestCase.test_get_all_devices
  testtools.testresult.real._StringException: Traceback (most recent call last):
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 182, in func
      return f(self, *args, **kwargs)
    File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/macvtap/agent/test_macvtap_neutron_agent.py", line 47, in test_get_all_devices
      self.assertEqual(set([macvtap.link.address]),
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py", line 394, in assertEqual
      self.assertThat(observed, matcher, message)
    File "/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py", line 481, in assertThat
      raise mismatch_error
  testtools.matchers._impl.MismatchError: {'3a:83:7e:60:34:b6'} != {'66:81:56:14:7d:0d'}

  https://zuul.opendev.org/t/openstack/build/235c115c538f4f84b839f15b628339b6/logs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2013045/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1918145] Re: Slownesses on neutron API with many RBAC rules

2023-01-06 Thread Sahid Orentino
I think one of the first steps we can take is to remove the ORDER BY, as it
creates the temporary filesort you mentioned in #9.

Unless I am missing something, ordering by UUID does not bring any value.

A second step would be to understand why the candidate key object_id is not
used.

There is another point as well: we filter per action, but I do not think we
have an index on that column, so that may also be worth investigating; a
hypothetical migration is sketched below.
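
For that last point, a hypothetical alembic-style migration (table and column
names are taken from the report; this is an illustration, not a reviewed
change) could add the missing index:

    from alembic import op

    def upgrade():
        # Let the per-action filter on RBAC entries use an index instead of
        # scanning the whole table.
        op.create_index('ix_networkrbacs_action', 'networkrbacs', ['action'])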


** Changed in: neutron
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1918145

Title:
  Slownesses on neutron API with many RBAC rules

Status in neutron:
  Confirmed

Bug description:
  * Summary: Slownesses on neutron API with many RBAC rules

  * High level description: Sharing several networks or security groups
  with projects drastically increases API response time on some routes
  (/networks or /server/detail).

  For quite some time we have been observing that response times are
  increasing (slowly but surely) on /networks calls. We have increased
  the number of Neutron workers, but in vain.

  Lately, we're observing that it's getting worse (response times from 5 to
  370 seconds). We discarded possible bottlenecks one by one (our service
  endpoint performance, neutron API configuration, etc.).
  But we have found that some calls to the DB take a lot of time. They seem
  to be stuck in the mariadb database (10.3.10), so we captured the slow
  queries in mysql.

  An example for /server/detail:
  -
  http://paste.openstack.org/show/803334/

  We can see that more than 2 million rows are examined, and around 1657
  are returned.

  An example for /networks:
  
  http://paste.openstack.org/show/803337/
  Rows_sent: 517  Rows_examined: 223519

  * Pre-conditions:
  Database table sizes:
  -   networkrbacs: 16928 rows
  -   securitygrouprbacs: 1691 rows
  -   keystone.project: 1713 rows

  Control plane nodes are shared with some others services:
  - RMQ
  - mariadb
  - Openstack APIs
  - DHCP agents

  It seems the code behind those queries is based on
  https://github.com/openstack/neutron-lib/blob/698e4c8daa7d43018a71122ec5b0cd5b17b55141/neutron_lib/db/model_query.py#L120

  * Step-by-step reproduction steps:

  - Create a lot of projects (at least 1000)
  - Create a SG in admin account
  - Create fake networks (vlan, vxlan) with associated
  - Share the SG and all networks with all projects

  * Expected output: lower response time, less than 5 seconds
  (approximately).

  * Actual output: May lead to gateway timeout.

  * Version:
    ** OpenStack version Stein releases for all components (neutron 14.2.0).
    ** CentOS 7.4 with kolla containers
    ** kolla-ansible for stein release

  * Environment: We operate all services in Openstack except for Cinder.

  * Perceived severity: Medium

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1918145/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1994967] [NEW] Evacuating instances should be stopped at virt-driver level

2022-10-27 Thread Sahid Orentino
Public bug reported:

The current behavior for an evacuated instance at the destination node is to
have the virt driver start the virtual machine, followed by a compute API
call, if needed, to stop the instance.

A cleaner solution would be for the virt driver API to handle an expected
state when spawning on the host.
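
A hypothetical illustration of the proposal (this is not an existing nova API
at the time of this report; parameter and helper names are invented): pass the
expected power state down to the driver so an evacuated, previously stopped
instance is never started on the destination host.

    def spawn(self, context, instance, image_meta, injected_files,
              admin_password, allocations, network_info=None,
              block_device_info=None, power_on=True):
        # Define the guest without starting it.
        self._create_guest(instance, image_meta, network_info,
                           block_device_info)
        if power_on:
            # Only start the guest when the expected state is "running".
            self._power_on(context, instance)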

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1994967

Title:
  Evacuating instances should be stopped at virt-driver level

Status in OpenStack Compute (nova):
  New

Bug description:
  The current behavior for an evacuated instance at the destination node is
  to have the virt driver start the virtual machine, followed by a compute
  API call, if needed, to stop the instance.

  A cleaner solution would be for the virt driver API to handle an
  expected state when spawning on the host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1994967/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1959750] [NEW] potential performance issue when scheduling network segments

2022-02-02 Thread Sahid Orentino
Public bug reported:

During some investigations regarding segments we noticed possible
performance issues related to the current algorithm that schedules
network segments on hosts.

When an agent reports a change in a segment, the process goes through the
function `auto_schedule_new_network_segments` with the list of the
segments that this host handles.

This function retrieves the networks related to those segments, and then the
algorithm runs a double for loop: it iterates through the networks and, per
network, through the segments to schedule the network segments on all hosts.

for network_id in network_ids:
    for segment in segments:
        self._schedule_network(
            payload.context, network_id, dhcp_notifier,
            candidate_hosts=segment['hosts'])

Depending on the design chosen, in a setup that has a hundred segments per
host with a hundred networks, and potentially segments that share the same
list of hosts, we will end up calling _schedule_network 10,000 times, with
duplication.

To avoid such duplication and unnecessary calls of _schedule_network for
the same hosts, we may want to introduce a data structure that stores, for
each network, the hosts already scheduled.

 for network_id in network_ids:
     for segment in segments:
         if not _already_scheduled(network_id, segment['hosts']):
             self._schedule_network(
                 payload.context, network_id, dhcp_notifier,
                 candidate_hosts=segment['hosts'])

With this same scenario, using such an algorithm would reduce the number of
calls to the number of networks, i.e. 100.
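
A minimal sketch of the proposed bookkeeping (assuming segment['hosts'] is a
list of host names; this is an illustration, not the reviewed patch):

    # Remember which (network, host set) pairs were already scheduled during
    # this pass and skip exact repeats.
    _scheduled = {}   # network_id -> set of frozenset(hosts)

    def _already_scheduled(network_id, hosts):
        key = frozenset(hosts)
        done = _scheduled.setdefault(network_id, set())
        if key in done:
            return True
        done.add(key)
        return False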

Thanks,
s.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1959750

Title:
  potential performance issue when scheduling network segments

Status in neutron:
  New

Bug description:
  During some investigations regarding segments we noticed possible
  performance issues related to the current algorithm that schedules
  network segments on hosts.

  When an agent reports a change in a segment, the process goes through
  the function `auto_schedule_new_network_segments` with the list of the
  segments that this host handles.

  This function retrieves the networks related to those segments, and then
  the algorithm runs a double for loop: it iterates through the networks
  and, per network, through the segments to schedule the network segments
  on all hosts.

  for network_id in network_ids:
      for segment in segments:
          self._schedule_network(
              payload.context, network_id, dhcp_notifier,
              candidate_hosts=segment['hosts'])

  Depending on the design chosen, in a setup that has a hundred segments
  per host with a hundred networks, and potentially segments that share the
  same list of hosts, we will end up calling _schedule_network 10,000 times,
  with duplication.

  To avoid such duplication and unnecessary calls of _schedule_network
  for the same hosts, we may want to introduce a data structure that
  stores, for each network, the hosts already scheduled.

   for network_id in network_ids:
       for segment in segments:
           if not _already_scheduled(network_id, segment['hosts']):
               self._schedule_network(
                   payload.context, network_id, dhcp_notifier,
                   candidate_hosts=segment['hosts'])

  With this same scenario, using such an algorithm would reduce the number
  of calls to the number of networks, i.e. 100.

  Thanks,
  s.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1959750/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1835037] Re: Upgrade from bionic-rocky to bionic-stein failed migrations.

2019-07-23 Thread Sahid Orentino
I also proposed a fix for nova since 'nova-manage cellv2 update_cell' is
bugged for cell0.

  https://review.opendev.org/#/c/672045/

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1835037

Title:
  Upgrade from bionic-rocky to bionic-stein failed migrations.

Status in OpenStack nova-cloud-controller charm:
  In Progress
Status in OpenStack Compute (nova):
  New

Bug description:
  We were trying to upgrade from rocky to stein using the charm
  procedure described here:

  https://docs.openstack.org/project-deploy-guide/charm-deployment-
  guide/latest/app-upgrade-openstack.html

  and we got into this problem,

  
  2019-07-02 09:56:44 ERROR juju-log online_data_migrations failed
  Running batches of 50 until complete
  Error attempting to run
  9 rows matched query populate_user_id, 0 migrated

  Migration (Total Needed / Completed):
  - create_incomplete_consumers: 0 / 0
  - delete_build_requests_with_no_instance_uuid: 0 / 0
  - fill_virtual_interface_list: 0 / 0
  - migrate_empty_ratio: 0 / 0
  - migrate_keypairs_to_api_db: 0 / 0
  - migrate_quota_classes_to_api_db: 0 / 0
  - migrate_quota_limits_to_api_db: 0 / 0
  - migration_migrate_to_uuid: 0 / 0
  - populate_missing_availability_zones: 0 / 0
  - populate_queued_for_delete: 0 / 0
  - populate_user_id: 9 / 0
  - populate_uuids: 0 / 0
  - service_uuids_online_data_migration: 0 / 0

  Some migrations failed unexpectedly. Check log for details.

  What should we do to get this fixed?

  Regards,

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-cloud-controller/+bug/1835037/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1831986] Re: fwaas_v2 - unable to associate port with firewall (PXC strict mode)

2019-06-07 Thread Sahid Orentino
Primary keys are missing for firewall_group_port_associations_v2 and
firewall_policy_rule_associations_v2. The workaround is to relax the PXC
strict mode [0]:

  juju config percona-cluster pxc-strict-mode=PERMISSIVE

[0]
https://bugs.launchpad.net/ubuntu/+source/octavia/+bug/1826875/comments/3
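
A hypothetical alembic-style sketch of the underlying fix (only
firewall_group_id appears in the error message; the second column is an
assumption based on the table's purpose): give the association table an
explicit primary key so PXC strict mode accepts DML on it.

    from alembic import op

    def upgrade():
        # Composite primary key over the association columns; check the real
        # schema before using anything like this.
        op.create_primary_key(
            'pk_firewall_group_port_associations_v2',
            'firewall_group_port_associations_v2',
            ['firewall_group_id', 'port_id'])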

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sahid Orentino (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1831986

Title:
  fwaas_v2 - unable to associate port with firewall (PXC strict mode)

Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  New
Status in neutron-fwaas package in Ubuntu:
  New

Bug description:
  Impacts both Stein and Rocky (although rocky does not enable v2 just
  yet).

  542 a9761fa9124740028d0c1d70ff7aa542] DBAPIError exception wrapped from 
(pymysql.err.InternalError) (1105, 'Percona-XtraDB-Cluster prohibits use of DML 
command on a table (neutron.firewall_group_port_associations_v2) without an 
explicit primary key with pxc_strict_mode = ENFORCING or MASTER') [SQL: 'DELETE 
FROM firewall_group_port_associations_v2 WHERE 
firewall_group_port_associations_v2.firewall_group_id = 
%(firewall_group_id_1)s'] [parameters: {'firewall_group_id_1': 
'85a277d0-ebaf-4a5d-9d45-6a74b8f54372'}] (Background on this error at: 
http://sqlalche.me/e/2j85): pymysql.err.InternalError: (1105, 
'Percona-XtraDB-Cluster prohibits use of DML command on a table 
(neutron.firewall_group_port_associations_v2) without an explicit primary key 
with pxc_strict_mode = ENFORCING or MASTER')
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1193, in 
_execute_context
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
context)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 509, in 
do_execute
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/cursors.py", line 165, in execute
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters result 
= self._query(query)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/cursors.py", line 321, in _query
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
conn.query(q)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 860, in query
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 1061, in 
_read_query_result
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
result.read()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 1349, in read
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
first_packet = self.connection._read_packet()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 1018, in 
_read_packet
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
packet.check_error()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 384, in 
check_error
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
err.raise_mysql_exception(self._data)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/err.py", line 107, in 
raise_mysql_exception
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters raise 
errorclass(errno, errval)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
pymysql.err.InternalError: (1105, 'Percona-XtraDB-Cluster prohibits use of DML 
command on a table (neutron.firewall_group_port_associations_v2) without an 
explicit primary key with pxc_strict_mode = ENFORCING or MASTER')
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: neutron-server 2:1

[Yahoo-eng-team] [Bug 1667736] Re: gate-neutron-fwaas-dsvm-functional failure after recent localrc change

2019-03-21 Thread Sahid Orentino
** Changed in: cloud-archive
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667736

Title:
  gate-neutron-fwaas-dsvm-functional failure after recent localrc change

Status in Ubuntu Cloud Archive:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron-fwaas package in Ubuntu:
  Fix Released

Bug description:
  eg. http://logs.openstack.org/59/286059/1/check/gate-neutron-fwaas-
  dsvm-functional/a0f2285/console.html

  2017-02-24 15:27:58.187720 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:26 : 
  source /opt/stack/new/devstack/localrc
  2017-02-24 15:27:58.187833 | 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh: line 26: 
/opt/stack/new/devstack/localrc: No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1667736/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1806079] Re: revert use of stestr in stable/pike

2019-03-21 Thread Sahid Orentino
The patch which reverts the problematic change in Nova has been released
in our packages for version 2:16.1.6-0ubuntu1~cloud0 [0]. Let's mark
this bug has Fix Released [1].

For upstream Nova, the community are against the revert, it should
probably be marked as won't fix.

[0] 
https://git.launchpad.net/~ubuntu-server-dev/ubuntu/+source/nova/commit/?h=stable/pike=f42d697d606bd1ceff54cce665fe80641956f932
[1] 
https://git.launchpad.net/~ubuntu-server-dev/ubuntu/+source/nova/commit/?h=stable/pike=21f71d906812a80ab3d1d96d22b04cf5744ed35c

** Changed in: cloud-archive
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1806079

Title:
  revert use of stestr in stable/pike

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The following commit changed dependencies of nova in the stable/pike
  branch and switched it to use stestr. There aren't any other projects
  (as far as I can tell) that use stestr in pike. This causes issues,
  for example, the Ubuntu cloud archive for pike doesn't have stestr. If
  possible I think this should be reverted.

  
  commit 5939ae995fdeb2746346ebd81ce223e4fe891c85
  Date:   Thu Jul 5 16:09:17 2018 -0400

  Backport tox.ini to switch to stestr
  
  The pike branch was still using ostestr (instead of stestr) which makes
  running tests significantly different from queens or master. To make
  things behave the same way this commit backports most of the tox.ini
  from queens so that pike will behave the same way for running tests.
  This does not use the standard backport mechanism because it involves a
  lot of different commits over time. It's also not a functional change
  for nova itself, so the proper procedure is less important here.
  
  Change-Id: Ie207afaf8defabc1d1eb9332f43a9753a00f784d

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1806079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1815844] Re: iscsi multipath dm-N device only used on first volume attachment

2019-02-27 Thread Sahid Orentino
Basically the issue is related to 'find_multipaths "yes"' in
/etc/multipath.conf. The patch I proposed fixes the issue but adds more
complexity to an algorithm that is already a bit tricky, so let's see whether
upstream is going to accept it.

At the very least we should document that multipath should only be used when
multipathd is configured with:

   find_multipaths "no"

I'm re-adding charm-nova-compute to this bug so we can add a note about it in
the documentation of the option.
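
For illustration, a minimal /etc/multipath.conf defaults stanza with that
setting could look like the following (only find_multipaths matters here; the
other option is just a common example, not a recommendation):

    defaults {
        user_friendly_names yes
        find_multipaths "no"
    }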

** Changed in: charm-nova-compute
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815844

Title:
  iscsi multipath dm-N device only used on first volume attachment

Status in OpenStack nova-compute charm:
  New
Status in OpenStack Compute (nova):
  Invalid
Status in os-brick:
  New

Bug description:
  With nova-compute from cloud:xenial-queens and use-multipath=true
  iscsi multipath is configured and the dm-N devices used on the first
  attachment but subsequent attachments only use a single path.

  The back-end storage is a Purestorage array.
  The multipath.conf is attached
  The issue is easily reproduced as shown below:

  jog@pnjostkinfr01:~⟫ openstack volume create pure2 --size 10 --type pure
  +-+--+
  | Field   | Value|
  +-+--+
  | attachments | []   |
  | availability_zone   | nova |
  | bootable| false|
  | consistencygroup_id | None |
  | created_at  | 2019-02-13T23:07:40.00   |
  | description | None |
  | encrypted   | False|
  | id  | e286161b-e8e8-47b0-abe3-4df411993265 |
  | migration_status| None |
  | multiattach | False|
  | name| pure2|
  | properties  |  |
  | replication_status  | None |
  | size| 10   |
  | snapshot_id | None |
  | source_volid| None |
  | status  | creating |
  | type| pure |
  | updated_at  | None |
  | user_id | c1fa4ae9a0b446f2ba64eebf92705d53 |
  +-+--+

  jog@pnjostkinfr01:~⟫ openstack volume show pure2
  ++--+
  | Field  | Value|
  ++--+
  | attachments| []   |
  | availability_zone  | nova |
  | bootable   | false|
  | consistencygroup_id| None |
  | created_at | 2019-02-13T23:07:40.00   |
  | description| None |
  | encrypted  | False|
  | id | e286161b-e8e8-47b0-abe3-4df411993265 |
  | migration_status   | None |
  | multiattach| False|
  | name   | pure2|
  | os-vol-host-attr:host  | cinder@cinder-pure#cinder-pure   |
  | os-vol-mig-status-attr:migstat | None |
  | os-vol-mig-status-attr:name_id | None |
  | os-vol-tenant-attr:tenant_id   | 9be499fd1eee48dfb4dc6faf3cc0a1d7 |
  | properties |  |
  | replication_status | None |
  | size   | 10   |
  | snapshot_id| None |
  | source_volid   | None |
  | status | available|
  | type   | pure |
  | updated_at | 2019-02-13T23:07:41.00   |
  | user_id| 

[Yahoo-eng-team] [Bug 1815844] Re: iscsi multipath dm-N device only used on first volume attachment

2019-02-22 Thread Sahid Orentino
Patch proposed against os-brick here [0]

[0] https://review.openstack.org/#/c/638639/

** Also affects: os-brick
   Importance: Undecided
   Status: New

** Changed in: os-brick
 Assignee: (unassigned) => Sahid Orentino (sahid-ferdjaoui)

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1815844

Title:
  iscsi multipath dm-N device only used on first volume attachment

Status in OpenStack nova-compute charm:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in os-brick:
  New

Bug description:
  With nova-compute from cloud:xenial-queens and use-multipath=true
  iscsi multipath is configured and the dm-N devices used on the first
  attachment but subsequent attachments only use a single path.

  The back-end storage is a Purestorage array.
  The multipath.conf is attached
  The issue is easily reproduced as shown below:

  jog@pnjostkinfr01:~⟫ openstack volume create pure2 --size 10 --type pure
  +-+--+
  | Field   | Value|
  +-+--+
  | attachments | []   |
  | availability_zone   | nova |
  | bootable| false|
  | consistencygroup_id | None |
  | created_at  | 2019-02-13T23:07:40.00   |
  | description | None |
  | encrypted   | False|
  | id  | e286161b-e8e8-47b0-abe3-4df411993265 |
  | migration_status| None |
  | multiattach | False|
  | name| pure2|
  | properties  |  |
  | replication_status  | None |
  | size| 10   |
  | snapshot_id | None |
  | source_volid| None |
  | status  | creating |
  | type| pure |
  | updated_at  | None |
  | user_id | c1fa4ae9a0b446f2ba64eebf92705d53 |
  +-+--+

  jog@pnjostkinfr01:~⟫ openstack volume show pure2
  ++--+
  | Field  | Value|
  ++--+
  | attachments| []   |
  | availability_zone  | nova |
  | bootable   | false|
  | consistencygroup_id| None |
  | created_at | 2019-02-13T23:07:40.00   |
  | description| None |
  | encrypted  | False|
  | id | e286161b-e8e8-47b0-abe3-4df411993265 |
  | migration_status   | None |
  | multiattach| False|
  | name   | pure2|
  | os-vol-host-attr:host  | cinder@cinder-pure#cinder-pure   |
  | os-vol-mig-status-attr:migstat | None |
  | os-vol-mig-status-attr:name_id | None |
  | os-vol-tenant-attr:tenant_id   | 9be499fd1eee48dfb4dc6faf3cc0a1d7 |
  | properties |  |
  | replication_status | None |
  | size   | 10   |
  | snapshot_id| None |
  | source_volid   | None |
  | status | available|
  | type   | pure |
  | updated_at | 2019-02-13T23:07:41.00   |
  | user_id| c1fa4ae9a0b446f2ba64eebf92705d53 |
  ++--+

  Add the volume to an instance:
  jog@pnjostkinfr01:~⟫ openstack server add volume T1 pure2
  jog@pnjostkinfr01:~⟫ openstack server s