[Yahoo-eng-team] [Bug 1917508] [NEW] Router create fails when router with same name already exists

2021-03-02 Thread Carlos Goncalves
Public bug reported:

Test
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_preserve_preexisting_port
failed with what appears to be an issue in the OVN ML2 driver. According
to Terry, "it looks like the create_lrouter call in impl_idl_ovn.py
passes may_exist, but the one in ovn_client.py which you are hitting
does not."
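
For illustration only (this is not the neutron or ovsdbapp code, just the
semantics Terry describes), a small, self-contained Python sketch of what
passing may_exist buys on a retried create:

    # Stand-in for the Logical_Router table keyed by name; illustrates why a
    # create that can be retried must tolerate an already-existing row.
    def create_lrouter(db, name, may_exist=False):
        if name in db:
            if may_exist:
                return db[name]      # idempotent: reuse the existing row
            raise RuntimeError("constraint violation: duplicate name %s" % name)
        db[name] = {'name': name}
        return db[name]

    db = {}
    create_lrouter(db, 'router-a', may_exist=True)
    create_lrouter(db, 'router-a', may_exist=True)   # retry: no error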

https://64e4c686e8a5385bf7e9-3a9e3dcf5065ad1abf1d1a27741d8ba4.ssl.cf5.rackcdn.com/775444/9/check/tripleo-ci-centos-8-containers-multinode/e812d2f/logs/undercloud/var/log/tempest/tempest_run.log

https://zuul.opendev.org/t/openstack/build/e812d2fb618b45118fca269af335d0f4/log/logs/subnode-1/var/log/containers/neutron/server.log#9668-9736

2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource 
[req-c24e459e-3cd0-457f-8111-4d6bd5e07d05 3cb9d2fb8de645868d440abcd947ea0c 
c2be1e148cc3473e9f880590eb3ae771 - default default] create failed: No details.: 
RuntimeError: OVSDB Error: {"details":"Transaction causes multiple rows in 
\"Logical_Router_Port\" table to have identical values 
(lrp-ffeb33eb-75bf-4330-9d96-188c1529bf18) for index on column \"name\".  First 
row, with UUID c4340403-af73-46c4-a27c-77969d0522fd, existed in the database 
before this transaction and was not modified by the transaction.  Second row, 
with UUID 42b137d3-14ad-4ead-bcdc-b2da5684511b, was inserted by this 
transaction.","error":"constraint violation"}
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource Traceback (most recent 
call last):
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron/api/v2/resource.py", line 98, in 
resource
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron/api/v2/base.py", line 437, in create
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron_lib/db/api.py", line 139, in wrapped
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource 
self.force_reraise()
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in 
force_reraise
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource raise self.value
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron_lib/db/api.py", line 135, in wrapped
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_db/api.py", line 154, in wrapper
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource 
self.force_reraise()
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in 
force_reraise
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource raise self.value
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_db/api.py", line 142, in wrapper
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron_lib/db/api.py", line 183, in wrapped
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource LOG.debug("Retry 
wrapper got retriable exception: %s", e)
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource 
self.force_reraise()
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in 
force_reraise
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource raise self.value
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron_lib/db/api.py", line 179, in wrapped
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource return 
f(*dup_args, **dup_kwargs)
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron/api/v2/base.py", line 561, in _create
2021-03-02 15:52:21.648 29 ERROR neutron.api.v2.resource obj = 

[Yahoo-eng-team] [Bug 1914231] [NEW] IPv6 subnet creation in segmented network partially fails

2021-02-02 Thread Carlos Goncalves
Public bug reported:

Creation of an IPv6 subnet in a segmented network partially fails:
Neutron returns HTTP 400, but the subnet record is nevertheless created
and shows up in the subnet list (a scripted check is sketched after the
CLI output below).

Master OpenStack deployment (OVN TripleO-based):
  - neutron a1e74ac
  - neutron-lib: 307b6be
  - neutronclient: 4963c7a
  - OVN: 2.13-20.09.0-17.el8.x86_64


$ openstack --os-cloud overcloud subnet create --network multi-segment-provider --network-segment segment-r2 --subnet-range 2001:db8:0:2::/64 --ip-version 6 --ipv6-address-mode=slaac --ipv6-ra-mode=slaac subnet-segment-2-ipv6
BadRequestException: 400: Client Error for url: http://99.88.88.88:9696/v2.0/subnets, Invalid input for operation: Failed to create port on network e1503814-f257-4ccb-bb96-15337a345f77, because fixed_ips included invalid subnet fcdb0a79-bd40-45d3-b0ee-8fa820c1b3e1.

$ openstack subnet list
+--------------------------------------+-----------------------+--------------------------------------+-------------------+
| ID                                   | Name                  | Network                              | Subnet            |
+--------------------------------------+-----------------------+--------------------------------------+-------------------+
| 19a5dc8a-897d-45b3-8384-97c1566f69b7 | private-subnet        | 2b78aa5a-7a70-489a-a2d6-82b31f77b279 | 10.0.0.0/24       |
| 789c1e83-a3cd-4e01-ae98-286dc5c40d1a | subnet-segment-3      | e1503814-f257-4ccb-bb96-15337a345f77 | 172.24.3.0/24     |
| eada0504-71cd-424f-8ca9-cca2b925fb6d | subnet-segment-2      | e1503814-f257-4ccb-bb96-15337a345f77 | 172.24.2.0/24     |
| fcdb0a79-bd40-45d3-b0ee-8fa820c1b3e1 | subnet-segment-2-ipv6 | e1503814-f257-4ccb-bb96-15337a345f77 | 2001:db8:0:2::/64 |
+--------------------------------------+-----------------------+--------------------------------------+-------------------+

$ openstack subnet show subnet-segment-2-ipv6
+--+--+
| Field| Value|
+--+--+
| allocation_pools | 2001:db8:0:2::1-2001:db8:0:2:::: |
| cidr | 2001:db8:0:2::/64|
| created_at   | 2021-02-02T10:47:54Z |
| description  |  |
| dns_nameservers  |  |
| dns_publish_fixed_ip | None |
| enable_dhcp  | True |
| gateway_ip   | 2001:db8:0:2::   |
| host_routes  |  |
| id   | fcdb0a79-bd40-45d3-b0ee-8fa820c1b3e1 |
| ip_version   | 6|
| ipv6_address_mode| slaac|
| ipv6_ra_mode | slaac|
| name | subnet-segment-2-ipv6|
| network_id   | e1503814-f257-4ccb-bb96-15337a345f77 |
| prefix_length| None |
| project_id   | 10a643f5ca8a4667b8c7de126c312fc3 |
| revision_number  | 0|
| segment_id   | c6008184-3e32-4fbd-a413-e588a9c1249d |
| service_types| None |
| subnetpool_id| None |
| tags |  |
| updated_at   | 2021-02-02T10:47:54Z |
+--+--+

$ openstack network segment show segment-r2
+--+--+
| Field| Value|
+--+--+
| description  |  |
| id   | c6008184-3e32-4fbd-a413-e588a9c1249d |
| name | segment-r2   |
| network_id   | e1503814-f257-4ccb-bb96-15337a345f77 |
| network_type | flat |
| physical_network | r2   |
| segmentation_id  | None |
+--+--+
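
For reference, a minimal scripted version of the reproduction above, written
against openstacksdk; it assumes a cloud entry named 'overcloud' and reuses
the network, segment, name and CIDR from the manual commands (otherwise the
names are illustrative):

    # Hedged reproduction sketch: create the IPv6 subnet on a specific segment
    # and check whether a subnet record is left behind despite the HTTP 400.
    import openstack

    conn = openstack.connect(cloud='overcloud')
    network = conn.network.find_network('multi-segment-provider')
    segment = conn.network.find_segment('segment-r2')

    try:
        conn.network.create_subnet(
            network_id=network.id,
            segment_id=segment.id,
            name='subnet-segment-2-ipv6',
            ip_version=6,
            cidr='2001:db8:0:2::/64',
            ipv6_address_mode='slaac',
            ipv6_ra_mode='slaac',
        )
    except openstack.exceptions.HttpException as exc:
        print('subnet create failed:', exc)

    # If the bug is present, the record still shows up here despite the 400.
    leftovers = [s.id for s in conn.network.subnets(network_id=network.id)
                 if s.name == 'subnet-segment-2-ipv6']
    print('leftover subnet records:', leftovers)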


2021-02-02 10:47:34.823 30 INFO neutron.wsgi [-] 99.99.2.1 "GET / HTTP/1.1" 
status: 200  len: 227 time: 0.0046539
2021-02-02 10:47:48.333 33 DEBUG futurist.periodics [-] Submitting periodic 
callback 
'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance.DBInconsistenciesPeriodics.check_for_fragmentation_support'
 

[Yahoo-eng-team] [Bug 1866353] [NEW] Neutron API returning HTTP 201 for SG rule create when not fully created yet

2020-03-06 Thread Carlos Goncalves
Public bug reported:

Neutron API returns HTTP 201 (Created) for security group rule create
requests, even though it takes longer to apply the configuration to the
port. This means that for a period of time the firewall on the port is
outdated, which can pose a security risk or cause applications to fail
or misbehave. Although not tested, the q-agent might even miss the SG
rule add event from the Neutron server entirely and never apply it.

The log below shows a security group rule create request from Octavia to
Neutron. Neutron returns HTTP 201, but the q-agent has not yet applied
the configuration. The Octavia tempest test expects the load balancer
VIP to conform to the security group rules, but it fails because the
q-agent has not yet applied the new security group rule to the port.
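
Until the API can reflect when a rule is actually enforced on the port, a
test-side workaround is to retry the data-plane check for a bounded time
after the 201. A minimal sketch (the probe, address and timings below are
illustrative, not Octavia code):

    # Hedged client-side workaround: retry a TCP probe for a while after the
    # security-group-rule POST returned 201, since the agent may not have
    # programmed the port yet.
    import socket
    import time

    def wait_for_port_open(host, port, timeout=60, interval=2):
        """Return True once a TCP connect to host:port succeeds."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                with socket.create_connection((host, port), timeout=interval):
                    return True
            except OSError:
                time.sleep(interval)
        return False

    # e.g. after creating the ingress tcp/8080 rule seen in the log below:
    # assert wait_for_port_open('<VIP address>', 8080)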

Mar 03 17:33:24.786466 ubuntu-bionic-airship-kna1-0014969351 
octavia-worker[8605]: DEBUG octavia.controller.worker.v1.controller_worker [-] 
Task 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIP' 
(10c8bae1-19b1-4757-9530-12ac29384565) transitioned into state 'RUNNING' from 
state 'PENDING' {{(pid=8984) _task_receiver 
/usr/local/lib/python3.6/dist-packages/taskflow/listeners/logging.py:194}}
Mar 03 17:33:24.787574 ubuntu-bionic-airship-kna1-0014969351 
octavia-worker[8605]: DEBUG octavia.controller.worker.v1.tasks.network_tasks 
[None req-6bbb57f5-2a06-4e8e-9ddd-6da259333fd7 None None] Updating VIP of 
load_balancer 61145d72-04e1-49bd-bcb0-5c215ed217ea. {{(pid=8984) execute 
/opt/stack/octavia/octavia/controller/worker/v1/tasks/network_tasks.py:472}}
Mar 03 17:33:24.805139 ubuntu-bionic-airship-kna1-0014969351 
octavia-worker[8605]: DEBUG octavia.network.drivers.neutron.base [None 
req-6bbb57f5-2a06-4e8e-9ddd-6da259333fd7 None None] Neutron extension 
security-group found enabled {{(pid=8984) _check_extension_enabled 
/opt/stack/octavia/octavia/network/drivers/neutron/base.py:66}}
Mar 03 17:33:24.819184 ubuntu-bionic-airship-kna1-0014969351 
octavia-worker[8605]: DEBUG octavia.network.drivers.neutron.base [None 
req-6bbb57f5-2a06-4e8e-9ddd-6da259333fd7 None None] Neutron extension 
dns-integration is not enabled {{(pid=8984) _check_extension_enabled 
/opt/stack/octavia/octavia/network/drivers/neutron/base.py:70}}
Mar 03 17:33:24.832337 ubuntu-bionic-airship-kna1-0014969351 
octavia-worker[8605]: DEBUG octavia.network.drivers.neutron.base [None 
req-6bbb57f5-2a06-4e8e-9ddd-6da259333fd7 None None] Neutron extension qos found 
enabled {{(pid=8984) _check_extension_enabled 
/opt/stack/octavia/octavia/network/drivers/neutron/base.py:66}}
Mar 03 17:33:24.847909 ubuntu-bionic-airship-kna1-0014969351 
octavia-worker[8605]: DEBUG octavia.network.drivers.neutron.base [None 
req-6bbb57f5-2a06-4e8e-9ddd-6da259333fd7 None None] Neutron extension 
allowed-address-pairs found enabled {{(pid=8984) _check_extension_enabled 
/opt/stack/octavia/octavia/network/drivers/neutron/base.py:66}}
Mar 03 17:33:25.221590 ubuntu-bionic-airship-kna1-0014969351 
neutron-server[7030]: INFO neutron.wsgi [None 
req-137e4288-fac0-490b-b828-8b43a94f675c admin admin] 10.0.1.16,10.0.1.16 "POST 
/v2.0/security-group-rules HTTP/1.1" status: 201  len: 725 time: 0.1413145
Mar 03 17:33:25.224900 ubuntu-bionic-airship-kna1-0014969351 
octavia-worker[8605]: DEBUG octavia.controller.worker.v1.controller_worker [-] 
Task 'octavia.controller.worker.v1.tasks.network_tasks.UpdateVIP' 
(10c8bae1-19b1-4757-9530-12ac29384565) transitioned into state 'SUCCESS' from 
state 'RUNNING' with result 'None' {{(pid=8984) _task_receiver 
/usr/local/lib/python3.6/dist-packages/taskflow/listeners/logging.py:183}}
Mar 03 17:33:25.224298 ubuntu-bionic-airship-kna1-0014969351 
neutron-openvswitch-agent[7528]: DEBUG neutron.agent.resource_cache [None 
req-137e4288-fac0-490b-b828-8b43a94f675c admin admin] Received new resource 
SecurityGroupRule: 
SecurityGroupRule(created_at=2020-03-03T17:33:25Z,description='',direction='ingress',ethertype='IPv4',id=73e2e34d-a813-4846-8f85-2b8daae5d29c,port_range_max=8080,port_range_min=8080,project_id='e821f6bae64f4fa0bca1c230fbf4b364',protocol='tcp',remote_group_id=,remote_ip_prefix=192.0.1.0/32,revision_number=0,security_group_id=14216a23-b9c5-4cb3-b42d-c76b22c643ec,updated_at=2020-03-03T17:33:25Z)
 {{(pid=7528) record_resource_update 
/opt/stack/neutron/neutron/agent/resource_cache.py:192}}
Mar 03 17:33:25.224767 ubuntu-bionic-airship-kna1-0014969351 
neutron-openvswitch-agent[7528]: DEBUG neutron_lib.callbacks.manager [None 
req-137e4288-fac0-490b-b828-8b43a94f675c admin admin] Notify callbacks 
['neutron.api.rpc.handlers.securitygroups_rpc.SecurityGroupServerAPIShim._handle_sg_rule_update--9223372036854365827']
 for SecurityGroupRule, after_update {{(pid=7528) _notify_loop 
/usr/local/lib/python3.6/dist-packages/neutron_lib/callbacks/manager.py:193}}
Mar 03 17:33:25.225185 ubuntu-bionic-airship-kna1-0014969351 
neutron-openvswitch-agent[7528]: INFO neutron.agent.securitygroups_rpc [None 
req-137e4288-fac0-490b-b828-8b43a94f675c admin 

[Yahoo-eng-team] [Bug 1863213] [NEW] Spawning of DHCP processes fail: invalid netcat options

2020-02-13 Thread Carlos Goncalves
Public bug reported:

Devstack master, ML2/OVS, CentOS 7, Python 3.6.

No DHCP servers running. Instances fail to get a DHCP offer.

$ ps aux | egrep "dhcp|dnsmasq"
vagrant591  4.7  0.7 459196 114056 ?   Ss   07:26   0:33 
/usr/bin/python3.6 /usr/local/bin/neutron-dhcp-agent --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini
root  1057  0.0  0.0 102896  5472 ?S06:14   0:00 /sbin/dhclient 
-d -q -sf /usr/libexec/nm-dhcp-helper -pf /var/run/dhclient-eth0.pid -lf 
/var/lib/NetworkManager/dhclient-5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03-eth0.lease
 -cf /var/lib/NetworkManager/dhclient-eth0.conf eth0
root  1219 14.9  0.4 684168 77988 ?Sl   07:26   1:43 
/usr/bin/python3.6 /usr/local/bin/privsep-helper --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini 
--privsep_context neutron.privileged.default --privsep_sock_path 
/tmp/tmpxg0wq6j2/privsep.sock
root 14783  0.0  0.0 102896  2632 ?Ss   07:29   0:00 dhclient -v 
o-hm0 -cf /etc/dhcp/octavia/dhclient.conf
vagrant  18136  0.0  0.0 112716   988 pts/2S+   07:37   0:00 grep -E 
--color=auto dhcp|dnsmasq


Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: DEBUG 
neutron.agent.linux.dhcp [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Building initial lease file: 
/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/leases 
{{(pid=591) _output_init_lease_file 
/opt/stack/neutron/neutron/agent/linux/dhcp.py:681}}
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: DEBUG 
neutron.agent.linux.dhcp [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Done building initial lease file 
/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/leases with 
contents:
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: 1581752053 
fa:16:3e:31:b7:ea 192.168.233.2 * *
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]:  {{(pid=591) 
_output_init_lease_file /opt/stack/neutron/neutron/agent/linux/dhcp.py:708}}
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: DEBUG 
neutron.agent.linux.dhcp [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Building host file: 
/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/host 
{{(pid=591) _output_hosts_file 
/opt/stack/neutron/neutron/agent/linux/dhcp.py:739}}
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: DEBUG 
neutron.agent.linux.dhcp [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Done building host file 
/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/host 
{{(pid=591) _output_hosts_file 
/opt/stack/neutron/neutron/agent/linux/dhcp.py:780}}
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: DEBUG 
neutron.agent.linux.utils [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Unable to access 
/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/pid; Error: 
[Errno 2] No such file or directory: 
'/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/pid' 
{{(pid=591) get_value_from_file 
/opt/stack/neutron/neutron/agent/linux/utils.py:262}}
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: DEBUG 
neutron.agent.linux.utils [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Running command (rootwrap daemon): ['ip', 'netns', 'exec', 
'qdhcp-06d0ae0b-d730-4871-bef3-fa52e8638214', 'dnsmasq', '--no-hosts', '', 
'--pid-file=/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/pid',
 
'--dhcp-hostsfile=/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/host',
 
'--addn-hosts=/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/addn_hosts',
 
'--dhcp-optsfile=/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/opts',
 
'--dhcp-leasefile=/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/leases',
 '--dhcp-match=set:ipxe,175', '--dhcp-userclass=set:ipxe6,iPXE', 
'--local-service', '--bind-dynamic', 
'--dhcp-range=set:subnet-879783df-943d-486d-8447-8730b9f3051a,192.168.233.0,static,255.255.255.0,86400s',
 '--dhcp-option-force=option:mtu,1450', '--dhcp-lease-max=256', '--conf-file=', 
'--domain=openstacklocal'] {{(pid=591) execute_rootwrap_daemon 
/opt/stack/neutron/neutron/agent/linux/utils.py:103}}
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: ERROR 
neutron.agent.linux.utils [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Exit code: 2; Stdin: ; Stdout: ; Stderr: /bin/ncat: unrecognized option 
'--no-hosts'
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: Ncat: Try 
`--help' or man(1) ncat for more information, usage options and help. QUITTING.
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: 
Feb 14 07:34:13 localhost.localdomain neutron-dhcp-agent[591]: DEBUG 
neutron.agent.linux.dhcp [None req-4193fed2-5a2d-4c24-b3b1-72673dc2e1fe None 
None] Spawning DHCP process for network 
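
Note the stray empty string right after '--no-hosts' in the dnsmasq argument
list above. It is not confirmed that this empty element is the root cause of
the ncat error, but as a purely illustrative sketch, stripping empty elements
from such a command list before handing it to rootwrap would look like this:

    # Illustrative only, not the actual neutron fix: drop empty arguments from
    # a command list like the one logged above.
    cmd = ['dnsmasq', '--no-hosts', '',
           '--pid-file=/opt/stack/data/neutron/dhcp/06d0ae0b-d730-4871-bef3-fa52e8638214/pid']
    clean_cmd = [arg for arg in cmd if arg]   # the '' element is removed
    print(clean_cmd)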

[Yahoo-eng-team] [Bug 1575146] Re: [RFE] ovs port status should the same as physnet.

2017-11-13 Thread Carlos Goncalves
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1575146

Title:
  [RFE] ovs port status should the same as physnet.

Status in neutron:
  Fix Released

Bug description:
  In some cases, when the physnet is down, the VM should know its status,
  but currently it does not.

  So maybe we should add a function for this, perhaps behind a
  configuration option.

  When set to 'True', the port in the VM should be down when the physnet
  on the host is down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1575146/+subscriptions



[Yahoo-eng-team] [Bug 1639272] [NEW] LB agent not updating port status upon port misconfiguration

2016-11-04 Thread Carlos Goncalves
Public bug reported:

The Linux bridge agent does not update the status of a port once it is
no longer configured correctly. Nova or an operator manually deleting
ports from bridges under the control of the agent is one example.

See: https://review.openstack.org/#/c/351675/4/specs/ocata/port-data-plane-status.rst L230
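
For context, on Linux a bridge's member interfaces are listed under
/sys/class/net/<bridge>/brif/, so a check such as the following (illustrative
only, not the agent's actual code) can tell whether a tap device is still
plugged into its bridge and hence whether the port status should be flipped
to DOWN:

    # Illustrative check: report whether a tap device is still a member of
    # its Linux bridge by looking at /sys/class/net/<bridge>/brif/.
    import os

    def tap_in_bridge(bridge, tap):
        return os.path.exists(os.path.join('/sys/class/net', bridge, 'brif', tap))

    # Example with hypothetical names:
    # if not tap_in_bridge('brq12345678-90', 'tapdeadbeef-aa'):
    #     print('port should be reported DOWN')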

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1639272

Title:
  LB agent not updating port status upon port misconfiguration

Status in neutron:
  New

Bug description:
  The Linux bridge agent does not update the status of a port once it is
  no longer configured correctly. Nova or an operator manually deleting
  ports from bridges under the control of the agent is one example.

  See: https://review.openstack.org/#/c/351675/4/specs/ocata/port-data-plane-status.rst L230

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1639272/+subscriptions



[Yahoo-eng-team] [Bug 1598081] [NEW] [RFE] Port status update

2016-07-01 Thread Carlos Goncalves
Public bug reported:

Neutron port status field represents the current status of a port in the
cloud infrastructure. The field can take one of the following values:
'ACTIVE', 'DOWN', 'BUILD' and 'ERROR'.

At present, if a network event occurs in the data-plane (e.g. virtual or
physical switch fails or one of its ports, cable gets pulled
unintentionally, infrastructure topology changes, etc.), connectivity to
logical ports may be affected and tenants' services interrupted. When
tenants/cloud administrators are looking up their resources' status
(e.g. Nova instances and services running in them, network ports, etc.),
they will wrongly see everything looks fine. The problem is that Neutron
will continue reporting port 'status' as 'ACTIVE'.

Many SDN Controllers managing network elements have the ability to
detect and report network events to upper layers. This allows SDN
Controllers' users to be notified of changes and react accordingly. Such
information could be consumed by Neutron so that Neutron could update
the 'status' field of those logical ports, and additionally generate a
notification message to the message bus.

However, Neutron lacks a way to receive such information through e.g.
an ML2 driver or the REST API (the 'status' field is read-only).
There are pros and cons on both of these approaches as well as other
possible approaches. This RFE intends to trigger a discussion on how
Neutron could be improved to receive fault/change events from SDN
Controllers or even also from 3rd parties not in charge of controlling
the network (e.g. monitoring systems, human admins).
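
As a rough illustration of the kind of hook this RFE asks for, a callback
that maps an external fault event to a port status update could look like
the sketch below. The event payload and the wiring of the handler are
hypothetical; only the idea of translating an event into a 'status' value
follows from the description above.

    # Hedged sketch: translate a fault/recovery event received from an SDN
    # controller (or a monitoring system) into a Neutron port status.  The
    # event format is assumed, not an existing Neutron interface.
    PORT_STATUS_ACTIVE = 'ACTIVE'
    PORT_STATUS_DOWN = 'DOWN'

    def handle_dataplane_event(core_plugin, context, event):
        """event is assumed to carry {'port_id': ..., 'link_up': bool}."""
        new_status = PORT_STATUS_ACTIVE if event['link_up'] else PORT_STATUS_DOWN
        # The ML2 plugin already exposes update_port_status(); making it
        # reachable from an external notification path is what this RFE
        # is about.
        core_plugin.update_port_status(context, event['port_id'], new_status)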

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598081

Title:
  [RFE] Port status update

Status in neutron:
  New

Bug description:
  Neutron port status field represents the current status of a port in
  the cloud infrastructure. The field can take one of the following
  values: 'ACTIVE', 'DOWN', 'BUILD' and 'ERROR'.

  At present, if a network event occurs in the data-plane (e.g. virtual
  or physical switch fails or one of its ports, cable gets pulled
  unintentionally, infrastructure topology changes, etc.), connectivity
  to logical ports may be affected and tenants' services interrupted.
  When tenants/cloud administrators are looking up their resources'
  status (e.g. Nova instances and services running in them, network
  ports, etc.), they will wrongly see everything looks fine. The problem
  is that Neutron will continue reporting port 'status' as 'ACTIVE'.

  Many SDN Controllers managing network elements have the ability to
  detect and report network events to upper layers. This allows SDN
  Controllers' users to be notified of changes and react accordingly.
  Such information could be consumed by Neutron so that Neutron could
  update the 'status' field of those logical ports, and additionally
  generate a notification message to the message bus.

  However, Neutron lacks a way to receive such information through e.g.
  an ML2 driver or the REST API (the 'status' field is read-only).
  There are pros and cons on both of these approaches as well as other
  possible approaches. This RFE intends to trigger a discussion on how
  Neutron could be improved to receive fault/change events from SDN
  Controllers or even also from 3rd parties not in charge of controlling
  the network (e.g. monitoring systems, human admins).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1598081/+subscriptions



[Yahoo-eng-team] [Bug 1513144] [NEW] Allow admin to mark agents down

2015-11-04 Thread Carlos Goncalves
Public bug reported:

Cloud administrators have externally placed monitoring systems watching
different types of resources in their cloud infrastructures. A cloud
infrastructure comprises not only an OpenStack instance but also other
components not managed by, and possibly not visible to, OpenStack, such
as an SDN controller, physical network elements, etc.

External systems may detect a fault in one or more infrastructure
resources that subsequently affects services provided by OpenStack. From
a network perspective, an example of a fault is the crash of Open
vSwitch on a compute node.

When using the reference implementation (OVS + neutron-l2-agent), the
neutron-l2-agent will keep reporting its state to the Neutron server as
alive (there is a heartbeat; the service is up), even though there is an
internal error caused by the virtual bridge (br-int) being unreachable.
Through tools external to OpenStack that monitor Open vSwitch, the
administrator knows something is wrong and, as a fault management
action, may want to explicitly set the agent state to down.

Such an action requires a new API exposed by Neutron that allows admins
to set (true/false) the aliveness state of Neutron agents.

This feature request is in line with the work proposed for Nova [1] and
implemented in Liberty. The same is also currently being proposed for
Cinder [2].

[1] https://blueprints.launchpad.net/nova/+spec/mark-host-down
[2] https://blueprints.launchpad.net/cinder/+spec/mark-services-down
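
To make the request concrete, a sketch of how an external monitor might
consume such an API follows. The endpoint and payload are hypothetical (they
are what this RFE asks Neutron to add), not an existing Neutron API.

    # Hypothetical illustration of the proposed API: an external monitor that
    # detected an openvswitch failure marks the corresponding L2 agent down.
    # The 'alive' field in the request body does NOT exist in Neutron today.
    import requests

    NEUTRON_URL = 'http://controller:9696/v2.0'
    HEADERS = {'X-Auth-Token': '<admin token>'}

    def mark_agent_down(agent_id):
        resp = requests.put('%s/agents/%s' % (NEUTRON_URL, agent_id),
                            headers=HEADERS,
                            json={'agent': {'alive': False}})
        resp.raise_for_status()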

** Affects: neutron
 Importance: Undecided
 Assignee: Carlos Goncalves (cgoncalves)
 Status: New


** Tags: rfe

** Changed in: neutron
 Assignee: (unassigned) => Carlos Goncalves (cgoncalves)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513144

Title:
  Allow admin to mark agents down

Status in neutron:
  New

Bug description:
  Cloud administrators have externally placed monitoring systems watching
  different types of resources in their cloud infrastructures. A cloud
  infrastructure comprises not only an OpenStack instance but also other
  components not managed by, and possibly not visible to, OpenStack, such
  as an SDN controller, physical network elements, etc.

  External systems may detect a fault in one or more infrastructure
  resources that subsequently affects services provided by OpenStack.
  From a network perspective, an example of a fault is the crash of Open
  vSwitch on a compute node.

  When using the reference implementation (OVS + neutron-l2-agent), the
  neutron-l2-agent will keep reporting its state to the Neutron server as
  alive (there is a heartbeat; the service is up), even though there is
  an internal error caused by the virtual bridge (br-int) being
  unreachable. Through tools external to OpenStack that monitor Open
  vSwitch, the administrator knows something is wrong and, as a fault
  management action, may want to explicitly set the agent state to down.

  Such an action requires a new API exposed by Neutron that allows
  admins to set (true/false) the aliveness state of Neutron agents.

  This feature request is in line with the work proposed for Nova [1]
  and implemented in Liberty. The same is also currently being proposed
  for Cinder [2].

  [1] https://blueprints.launchpad.net/nova/+spec/mark-host-down
  [2] https://blueprints.launchpad.net/cinder/+spec/mark-services-down

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513144/+subscriptions
