[Yahoo-eng-team] [Bug 1808917] [NEW] RetryRequest shouldn't log stack trace by default, or it should be configurable by the exception

2018-12-17 Thread Mike Kolesnik
Public bug reported:

I see the following littering the logs and it strikes me as wrong:

2018-12-18 01:01:46.259 34 DEBUG neutron.plugins.ml2.managers 
[req-196ce43f-2408-48f4-9c7e-bb90f66c9c14 - - - - -] DB exception raised by 
Mechanism driver 'opendaylight_v2' in update_port_precommit _call_on_drivers 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:434
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers Traceback (most 
recent call last):
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py", line 427, 
in _call_on_drivers
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/oslo_log/helpers.py", line 67, in wrapper
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers return 
method(*args, **kwargs)
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/networking_odl/ml2/mech_driver_v2.py", line 
117, in update_port_precommit
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers context, 
odl_const.ODL_PORT, odl_const.ODL_UPDATE)
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/networking_odl/ml2/mech_driver_v2.py", line 
87, in _record_in_journal
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers 
ml2_context=context)
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/networking_odl/journal/journal.py", line 123, 
in record
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers raise 
exception.RetryRequest(e)
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers RetryRequest
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers 

Since this is an explicit retry request raised by the operation, and not some 
unexpected behavior, it shouldn't log the stack trace.
If finer-grained control is needed (rather than never logging the trace), a 
flag can be added to the exception to determine whether its log entry should 
include the stack trace or not.

The code in question is here (also on master but this rocky url is simpler):
https://github.com/openstack/neutron/blob/stable/rocky/neutron/plugins/ml2/managers.py#L433
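
A minimal sketch of the proposed flag. The `log_traceback` attribute is hypothetical (the real oslo.db `RetryRequest` has no such flag today); the caller would consult it before deciding whether to attach exc_info:

```python
import logging

LOG = logging.getLogger(__name__)


class RetryRequest(Exception):
    """Sketch of a RetryRequest with a hypothetical trace opt-out flag."""

    def __init__(self, inner_exc, log_traceback=True):
        super().__init__(str(inner_exc))
        self.inner_exc = inner_exc
        # Hypothetical flag: code raising an intentional retry can set
        # this to False so callers skip logging the stack trace.
        self.log_traceback = log_traceback


def handle_driver_exception(exc):
    # Caller-side sketch: only log the full traceback when asked to.
    if isinstance(exc, RetryRequest) and not exc.log_traceback:
        LOG.debug("Retry requested by driver: %s", exc)
    else:
        LOG.error("DB exception raised by mechanism driver", exc_info=exc)
```

With this in place, _call_on_drivers could take the quiet debug path for deliberate retries while still logging full tracebacks for genuinely unexpected errors.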

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1808917

Title:
  RetryRequest shouldn't log stack trace by default, or it should be
  configurable by the exception

Status in neutron:
  New

Bug description:
  I see the following littering the logs and it strikes me as wrong:

  2018-12-18 01:01:46.259 34 DEBUG neutron.plugins.ml2.managers 
[req-196ce43f-2408-48f4-9c7e-bb90f66c9c14 - - - - -] DB exception raised by 
Mechanism driver 'opendaylight_v2' in update_port_precommit _call_on_drivers 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:434
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers Traceback (most 
recent call last):
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py", line 427, 
in _call_on_drivers
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/oslo_log/helpers.py", line 67, in wrapper
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers return 
method(*args, **kwargs)
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/networking_odl/ml2/mech_driver_v2.py", line 
117, in update_port_precommit
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers context, 
odl_const.ODL_PORT, odl_const.ODL_UPDATE)
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/networking_odl/ml2/mech_driver_v2.py", line 
87, in _record_in_journal
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers 
ml2_context=context)
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/networking_odl/journal/journal.py", line 123, 
in record
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers raise 
exception.RetryRequest(e)
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers RetryRequest
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers 

  Since this is an explicit retry request raised by the operation, and not 
some unexpected behavior, it shouldn't log the stack trace.
  If finer-grained control is needed (rather than never logging the trace), a 
flag can be added to the exception to determine whether its log entry should 
include the stack trace or not.

[Yahoo-eng-team] [Bug 1785656] [NEW] test_internal_dns.InternalDNSTest fails even though dns-integration extension isn't loaded

2018-08-06 Thread Mike Kolesnik
Public bug reported:

We're seeing this on the Networking-ODL CI [1].

The test
neutron_tempest_plugin.scenario.test_internal_dns.InternalDNSTest is
being executed even though there's a decorator to prevent it from
running [2].

Either the checker isn't working or something is missing, since other
DNS tests are being skipped automatically due to the extension not being
loaded.

[1] 
http://logs.openstack.org/91/584591/5/check/networking-odl-tempest-oxygen/df17c02/
[2] 
http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/neutron_tempest_plugin/scenario/test_internal_dns.py#n28
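
For context, a sketch of how extension-based skipping is usually wired up (the names here are illustrative stand-ins, not the actual neutron-tempest-plugin code); if the decorator checks a cached or mis-scoped extension list, tests running despite the extension being absent would be the symptom:

```python
import functools
import unittest

# Illustrative stand-in for the deployment's loaded extension aliases;
# note "dns-integration" is absent, matching the CI environment here.
LOADED_EXTENSIONS = {"router", "security-group"}


def requires_ext(extension):
    """Skip the decorated test unless the named API extension is loaded."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if extension not in LOADED_EXTENSIONS:
                raise unittest.SkipTest(
                    "%s extension not enabled" % extension)
            return func(self, *args, **kwargs)
        return wrapper
    return decorator


class InternalDNSTest(unittest.TestCase):
    @requires_ext("dns-integration")
    def test_dns_domain(self):
        self.fail("should have been skipped")
```

When the check works, the runner records the test as skipped rather than executing (and failing) it, which is the behavior the other DNS tests exhibit.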

** Affects: networking-odl
 Importance: Critical
 Status: Confirmed

** Affects: neutron
 Importance: High
 Status: Confirmed

** Also affects: networking-odl
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1785656

Title:
  test_internal_dns.InternalDNSTest fails even though dns-integration
  extension isn't loaded

Status in networking-odl:
  Confirmed
Status in neutron:
  Confirmed

Bug description:
  We're seeing this on the Networking-ODL CI [1].

  The test
  neutron_tempest_plugin.scenario.test_internal_dns.InternalDNSTest is
  being executed even though there's a decorator to prevent it from
  running [2].

  Either the checker isn't working or something is missing, since other
  DNS tests are being skipped automatically due to the extension not
  being loaded.

  [1] 
http://logs.openstack.org/91/584591/5/check/networking-odl-tempest-oxygen/df17c02/
  [2] 
http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/neutron_tempest_plugin/scenario/test_internal_dns.py#n28

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1785656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550278] Re: tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.* tempest tests are failing repeatedly in the gate for networking-ovn

2018-04-30 Thread Mike Kolesnik
Currently marking Invalid; if this resurfaces, please reopen.

** Changed in: networking-odl
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1550278

Title:
  tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.* tempest tests
  are failing repeatedly in the gate for networking-ovn

Status in networking-odl:
  Invalid
Status in networking-ovn:
  Fix Released
Status in neutron:
  Incomplete

Bug description:
  We are seeing a lot of tempest failures for the tests 
tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.* 
  with the below error.

  Either we should fix the error or at least disable these tests
  temporarily.

  
  t156.9: 
tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_stateless_no_ra[id-ae2f4a5d-03ff-4c42-a3b0-ce2fcb7ea832]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2016-02-26 07:29:46,168 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:test_dhcpv6_stateless_no_ra): 404 POST 
http://127.0.0.1:9696/v2.0/subnets 0.370s
  2016-02-26 07:29:46,169 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: {"subnet": {"cidr": "2003::/64", "ip_version": 6, "network_id": 
"4c7de56a-b059-4239-a5a0-94a53ba4929c", "gateway_ip": "2003::1", 
"ipv6_address_mode": "slaac"}}
  Response - Headers: {'content-length': '132', 'status': '404', 'date': 
'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-e21f771f-1a16-452a-9429-8a01f0409ae3'}
  Body: {"NeutronError": {"message": "Port 
598c23eb-1ae4-4010-a263-39f86240fd86 could not be found.", "type": 
"PortNotFound", "detail": ""}}
  2016-02-26 07:29:46,196 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:tearDown): 200 GET http://127.0.0.1:9696/v2.0/ports 
0.024s
  2016-02-26 07:29:46,197 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'content-location': 
'http://127.0.0.1:9696/v2.0/ports', 'content-length': '13', 'status': '200', 
'date': 'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-f0966c23-c72f-4a6f-b113-5d88a6dd5912'}
  Body: {"ports": []}
  2016-02-26 07:29:46,250 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:tearDown): 200 GET 
http://127.0.0.1:9696/v2.0/subnets 0.052s
  2016-02-26 07:29:46,251 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'content-location': 
'http://127.0.0.1:9696/v2.0/subnets', 'content-length': '457', 'status': '200', 
'date': 'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-3b29ba53-9ae0-4c0f-8c18-ec12db7a6bde'}
  Body: {"subnets": [{"name": "", "enable_dhcp": true, "network_id": 
"4c7de56a-b059-4239-a5a0-94a53ba4929c", "tenant_id": 
"631f9cb1391d41b6aba109afe06bc51b", "dns_nameservers": [], "gateway_ip": 
"2003::1", "ipv6_ra_mode": null, "allocation_pools": [{"start": "2003::2", 
"end": "2003:::::"}], "host_routes": [], "ip_version": 6, 
"ipv6_address_mode": "slaac", "cidr": "2003::/64", "id": 
"6bc2602c-2584-44cc-a6cd-b8af444f6403", "subnetpool_id": null}]}
  2016-02-26 07:29:46,293 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:tearDown): 200 GET 
http://127.0.0.1:9696/v2.0/routers 0.041s
  2016-02-26 07:29:46,293 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'content-location': 
'http://127.0.0.1:9696/v2.0/routers', 'content-length': '15', 'status': '200', 
'date': 'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-2b883ce9-b10f-4a49-a854-450c341f9cd9'}
  Body: {"routers": []}
  }}}

  Traceback (most recent call last):
File "tempest/api/network/test_dhcp_ipv6.py", line 129, in 
test_dhcpv6_stateless_no_ra
  real_ip, eui_ip = self._get_ips_from_subnet(**kwargs)
File "tempest/api/network/test_dhcp_ipv6.py", line 91, in 
_get_ips_from_subnet
  subnet = self.create_subnet(self.network, **kwargs)
File "tempest/api/network/base.py", line 196, in create_subnet
  **kwargs)
File "tempest/lib/services/network/subnets_c

[Yahoo-eng-team] [Bug 1612433] Re: neutron-db-manage autogenerate is generating empty upgrades

2017-03-30 Thread Mike Kolesnik
This seems not to work properly in subprojects:

$ neutron-db-manage --subproject networking-odl revision -m "Add journal 
dependency table" --autogenerate 
  Running revision for networking-odl ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_service_function_params'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'cisco_firewall_associations'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_port_pair_groups'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'bgpvpn_router_associations'
INFO  [alembic.autogenerate.compare] Detected removed table u'lbaas_l7rules'
INFO  [alembic.autogenerate.compare] Detected removed table u'sfc_path_nodes'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_path_port_associations'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_chain_group_associations'
INFO  [alembic.autogenerate.compare] Detected removed table u'physical_locators'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'l2gw_alembic_version'
INFO  [alembic.autogenerate.compare] Detected removed table u'lbaas_l7policies'
INFO  [alembic.autogenerate.compare] Detected removed table u'firewall_rules_v2'
INFO  [alembic.autogenerate.compare] Detected removed table u'sfc_port_pairs'
INFO  [alembic.autogenerate.compare] Detected removed table u'alembic_version'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'firewall_groups_v2'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_uuid_intid_associations'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'l2gatewayinterfaces'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'alembic_version_lbaas'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_flow_classifiers'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'alembic_version_fwaas'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'firewall_group_port_associations_v2'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'opendaylightjournal'
INFO  [alembic.autogenerate.compare] Detected removed table u'l2gatewaydevices'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'lbaas_sessionpersistences'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'lbaas_loadbalanceragentbindings'
INFO  [alembic.autogenerate.compare] Detected removed table u'lbaas_members'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'opendaylight_maintenance'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'lbaas_loadbalancers'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'firewall_policy_rule_associations_v2'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_port_chain_parameters'
INFO  [alembic.autogenerate.compare] Detected removed table u'physical_ports'
INFO  [alembic.autogenerate.compare] Detected removed table u'lbaas_listeners'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'pending_ucast_macs_remotes'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'alembic_version_bgpvpn'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'bgpvpn_network_associations'
INFO  [alembic.autogenerate.compare] Detected removed table u'bgpvpns'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_port_pair_group_params'
INFO  [alembic.autogenerate.compare] Detected removed table u'lbaas_pools'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_flow_classifier_l7_parameters'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'lbaas_healthmonitors'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_chain_classifier_associations'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'firewall_router_associations'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_portpair_details'
INFO  [alembic.autogenerate.compare] Detected removed table u'sfc_port_chains'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'lbaas_loadbalancer_statistics'
INFO  [alembic.autogenerate.compare] Detected removed table u'l2gateways'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'firewall_policies_v2'
INFO  [alembic.autogenerate.compare] Detected removed table u'ucast_macs_locals'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'l2gatewayconnections'
INFO  [alembic.autogenerate.compare] Detected removed table u'lbaas_sni'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'alembic_version_sfc'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'ucast_macs_remotes'
INFO  [alembic.autogenerate.compare] Detected removed table u'logical_switches'
INFO  [alembic.autogenerate.compare] Detected removed table u'physical_switches'
INFO  [alembic.autogenerate.compare] Detected removed table
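
A common way to stop autogenerate from flagging other projects' tables as removed is an `include_object` hook in the subproject's alembic env.py. This is a sketch under the assumption that the subproject's tables are enumerable by name; it is not the actual networking-odl configuration:

```python
# Sketch for a subproject's alembic env.py: restrict autogenerate
# comparison to tables this subproject owns, so tables belonging to
# other projects (lbaas_*, sfc_*, l2gw*, ...) are not reported as
# "Detected removed table".
ODL_TABLES = {"opendaylightjournal", "opendaylight_maintenance"}


def include_object(obj, name, type_, reflected, compare_to):
    if type_ == "table":
        return name in ODL_TABLES
    # Columns, indexes, etc. are only compared for included tables.
    return True
```

The hook would be passed as `context.configure(..., include_object=include_object)` in env.py's migration setup.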

[Yahoo-eng-team] [Bug 1546910] Re: args passed to securitygroup precommit event should include the complete info

2017-01-08 Thread Mike Kolesnik
** Also affects: networking-odl/3.0-newton
   Importance: Undecided
   Status: New

** Changed in: networking-odl/3.0-newton
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546910

Title:
  args passed to securitygroup precommit event should include the
  complete info

Status in networking-odl:
  In Progress
Status in networking-odl 3.0-newton series:
  New
Status in neutron:
  In Progress

Bug description:
  We introduced the PRECOMMIT_XXX events, but in securitygroups_db.py the
  kwargs passed to them do not include the complete DB info that the
  AFTER_XXX events carry. For example, the id of the newly created
  sg/rule is missing.
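
A sketch of the requested behavior: build the complete DB dict (id included) before firing the precommit notification, so PRECOMMIT_* subscribers see the same payload AFTER_* subscribers do. The `notify`/`subscribe` helpers below are stand-ins for neutron's callback registry, not the real API:

```python
import uuid

_callbacks = []


def subscribe(cb):
    _callbacks.append(cb)


def notify(resource, event, **kwargs):
    # Stand-in for neutron's registry.notify().
    for cb in _callbacks:
        cb(resource, event, **kwargs)


def create_security_group(name):
    # Build the full DB dict *before* notifying, so PRECOMMIT_CREATE
    # subscribers get complete info (including the generated id), just
    # like AFTER_CREATE subscribers.
    sg = {"id": str(uuid.uuid4()), "name": name, "security_group_rules": []}
    notify("security_group", "precommit_create", security_group=sg)
    return sg
```

The point of the sketch is ordering, not the helper names: the id exists by the time the precommit callback runs.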

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1546910/+subscriptions



[Yahoo-eng-team] [Bug 1462871] [NEW] L2Population on OVS broken due to ofctl refactoring

2015-06-07 Thread Mike Kolesnik
Public bug reported:

The refactor [1] to separate the ofctl logic into a driver broke L2pop on OVS.

The L2 agent shows this error when receiving a call to add_tunnel_port:

2015-06-08 04:33:50.287 DEBUG neutron.agent.l2population_rpc 
[req-a3dcc834-e97d-471b-8cae-02b6b0c58325 None None] 
neutron.plugins.openvswitch.
agent.ovs_neutron_agent.OVSNeutronAgent method fdb_add_tun called with 
arguments (, , , {u'10.35.6.102': 
[PortInfo(mac_address=u'00:00:00:00:00:00', ip_address=u'0.0.0.0'), 
PortInfo(mac_
address=u'fa:16:3e:c6:17:9f', ip_address=u'10.0.0.2'), 
PortInfo(mac_address=u'fa:16:3e:c6:17:9f', 
ip_address=u'fd59:ade1:1482:0:f816:3eff:fec6
:179f')]}, >) {} from (pid=14807) wrapper 
/usr/lib/python2.7/site-packages/oslo_log/helpers.py:45
2015-06-08 04:33:50.287 ERROR neutron.agent.common.ovs_lib 
[req-a3dcc834-e97d-471b-8cae-02b6b0c58325 None None] OVS flows could not be 
applied
 on bridge br-tun
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib Traceback (most 
recent call last):
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
"/opt/openstack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.
py", line 448, in fdb_add
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib agent_ports, 
self._tunnel_port_lookup)
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
"/usr/lib/python2.7/site-packages/oslo_log/helpers.py", line 46, in wrapper
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib return 
method(*args, **kwargs)
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
"/opt/openstack/neutron/neutron/agent/l2population_rpc.py", line 234, in fdb
_add_tun
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib lvm.network_type)
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
"/opt/openstack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.
py", line 1169, in setup_tunnel_port
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib network_type)
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
"/opt/openstack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.
py", line 1135, in _setup_tunnel_port
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib ofport = 
br.add_tunnel_port(port_name,
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
"/opt/openstack/neutron/neutron/plugins/openvswitch/agent/openflow/ovs_ofctl/br_tun.py",
 line 246, in __getattr__
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib raise 
AttributeError(name)
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib AttributeError: 
add_tunnel_port
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib 

[1] https://review.openstack.org/#/c/160245/
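
The failure mode is easy to reproduce in isolation: a bridge wrapper whose `__getattr__` raises AttributeError for any method it doesn't forward will break callers that still expect the pre-refactor add_tunnel_port. This is a minimal illustrative repro, not the actual ovs_ofctl br_tun code:

```python
class OVSTunnelBridge:
    """Illustrative stand-in for the post-refactor br-tun wrapper."""

    # Methods the refactored driver knows how to forward (made up here).
    IMPLEMENTED = {"install_flood_to_tun", "delete_flood_to_tun"}

    def __getattr__(self, name):
        # Anything not explicitly forwarded raises AttributeError,
        # matching "AttributeError: add_tunnel_port" in the trace above.
        if name in self.IMPLEMENTED:
            return lambda *args, **kwargs: None
        raise AttributeError(name)


br = OVSTunnelBridge()
try:
    br.add_tunnel_port("vxlan-0a230666", "10.35.6.102", "local")
except AttributeError as e:
    missing = str(e)
```

The fix is to either forward add_tunnel_port in the driver or stop calling it from the l2pop path; the repro just shows why the call site dies where it does.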

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l2-pop ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1462871

Title:
  L2Population on OVS broken due to ofctl refactoring

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The refactor [1] to separate the ofctl logic into a driver broke L2pop
  on OVS.

  The L2 agent shows this error when receiving a call to
  add_tunnel_port:

  2015-06-08 04:33:50.287 DEBUG neutron.agent.l2population_rpc 
[req-a3dcc834-e97d-471b-8cae-02b6b0c58325 None None] 
neutron.plugins.openvswitch.
  agent.ovs_neutron_agent.OVSNeutronAgent method fdb_add_tun called with 
arguments (, , , {u'10.35.6.102': 
[PortInfo(mac_address=u'00:00:00:00:00:00', ip_address=u'0.0.0.0'), 
PortInfo(mac_
  address=u'fa:16:3e:c6:17:9f', ip_address=u'10.0.0.2'), 
PortInfo(mac_address=u'fa:16:3e:c6:17:9f', 
ip_address=u'fd59:ade1:1482:0:f816:3eff:fec6
  :179f')]}, >) {} from (pid=14807) wrapper 
/usr/lib/python2.7/site-packages/oslo_log/helpers.py:45
  2015-06-08 04:33:50.287 ERROR neutron.agent.common.ovs_lib 
[req-a3dcc834-e97d-471b-8cae-02b6b0c58325 None None] OVS flows could not be 
applied
   on bridge br-tun
  2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib Traceback (most 
recent call last):
  2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
"/opt/openstack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.
  py", line 448, in fdb_add
  2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib agent_ports, 
self._tunnel_port_lookup)
  2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
"/usr/lib/python2.7/site-packages/oslo_log/helpers.py", line 46, in wrapper
  2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib return 
method(*args, **kwargs)
  2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
"/opt/openstack/neutron/neutron/agent/l2population_rpc.py", line 234, in fdb
  _add_tun
  2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib 
lvm.network_type)
  2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 

[Yahoo-eng-team] [Bug 1401095] [NEW] HA router can't be manually scheduled on L3 agent

2014-12-10 Thread Mike Kolesnik
Public bug reported:

HA routers get scheduled automatically to L3 agents; you can view the
hosting agents using l3-agent-list-hosting-router:

$ neutron l3-agent-list-hosting-router harouter2
+--------------------------------------+------+----------------+-------+
| id                                   | host | admin_state_up | alive |
+--------------------------------------+------+----------------+-------+
| 9c34ec17-9045-4744-ae82-1f65f72ce3bd | net1 | True           | :-)   |
| cf758b1b-423e-44d9-ab0f-cf0d524b3dac | net2 | True           | :-)   |
| f2aac1e3-7a00-47c3-b6c9-2543d4a2ba9a | net3 | True           | :-)   |
+--------------------------------------+------+----------------+-------+

You can remove it from an agent using l3-agent-router-remove, but when using 
l3-agent-router-add you get a 409:
$ neutron l3-agent-router-add bff55e85-65f6-4299-a3bb-f0e1c1ee2a05 harouter2
Conflict (HTTP 409) (Request-ID: req-22c1bb67-f0f8-4194-b863-93b8bb561c83)

The log says:
2014-12-10 07:47:41.036 INFO neutron.api.v2.resource 
[req-22c1bb67-f0f8-4194-b863-93b8bb561c83 admin 
f1bb80396ef34197b30117dfef45bea8] create failed (client error): The router 
72b9f897-b84d-4270-a645-af38fe3bd838 has been already hosted by the L3 Agent 
9c34ec17-9045-4744-ae82-1f65f72ce3bd.

** Affects: neutron
 Importance: Undecided
 Assignee: Yoni (yshafrir)
 Status: New


** Tags: ha l3agent router scheduling

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401095

Title:
  HA router can't be manually scheduled on L3 agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  HA routers get scheduled automatically to L3 agents; you can view the
  hosting agents using l3-agent-list-hosting-router:

  $ neutron l3-agent-list-hosting-router harouter2
  +--------------------------------------+------+----------------+-------+
  | id                                   | host | admin_state_up | alive |
  +--------------------------------------+------+----------------+-------+
  | 9c34ec17-9045-4744-ae82-1f65f72ce3bd | net1 | True           | :-)   |
  | cf758b1b-423e-44d9-ab0f-cf0d524b3dac | net2 | True           | :-)   |
  | f2aac1e3-7a00-47c3-b6c9-2543d4a2ba9a | net3 | True           | :-)   |
  +--------------------------------------+------+----------------+-------+

  You can remove it from an agent using l3-agent-router-remove, but when using 
l3-agent-router-add you get a 409:
  $ neutron l3-agent-router-add bff55e85-65f6-4299-a3bb-f0e1c1ee2a05 harouter2
  Conflict (HTTP 409) (Request-ID: req-22c1bb67-f0f8-4194-b863-93b8bb561c83)

  The log says:
  2014-12-10 07:47:41.036 INFO neutron.api.v2.resource 
[req-22c1bb67-f0f8-4194-b863-93b8bb561c83 admin 
f1bb80396ef34197b30117dfef45bea8] create failed (client error): The router 
72b9f897-b84d-4270-a645-af38fe3bd838 has been already hosted by the L3 Agent 
9c34ec17-9045-4744-ae82-1f65f72ce3bd.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401095/+subscriptions



[Yahoo-eng-team] [Bug 1323267] [NEW] Network shouldn't be shared and external at the same time

2014-05-26 Thread Mike Kolesnik
Public bug reported:

Marking a network as external represents a different usage for that specific 
network than an "ordinary" network.
It doesn't make sense to connect instances directly to the external network 
(otherwise you'd use the network directly rather than floating IPs).

For that reason, it also doesn't make sense to mark the network as
shared (and vice versa).

Currently it is allowed to mark a network as both shared and external; 
this should be prevented to deter misconfiguration and misuse of the 
network.
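
The requested guard amounts to one validation at create/update time. A hypothetical helper (not existing neutron code) showing the shape of the check:

```python
class InvalidNetworkConfig(Exception):
    """Raised when a network's flags are mutually inconsistent."""


def validate_network_flags(shared, external):
    # Per this report: a network used for floating-IP access (external)
    # should not also be directly attachable by all tenants (shared).
    if shared and external:
        raise InvalidNetworkConfig(
            "A network cannot be both shared and external")
```

Either flag alone stays valid; only the combination is rejected.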

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323267

Title:
  Network shouldn't be shared and external at the same time

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Marking a network as external represents a different usage for that specific 
network than an "ordinary" network.
  It doesn't make sense to connect instances directly to the external network 
(otherwise you'd use the network directly rather than floating IPs).

  For that reason, it also doesn't make sense to mark the network as
  shared (and vice versa).

  Currently it is allowed to mark a network as both shared and external;
  this should be prevented to deter misconfiguration and misuse of the
  network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1323267/+subscriptions



[Yahoo-eng-team] [Bug 1293184] [NEW] Can't clear shared flag of unused network

2014-03-16 Thread Mike Kolesnik
Public bug reported:

A network marked as external can be used as a gateway for tenant routers, even 
though it's not necessarily marked as shared.
If the 'shared' attribute is changed from True to False for such a network you 
get an error:
Unable to reconfigure sharing settings for network sharetest. Multiple tenants 
are using it

This is clearly not the intention of the 'shared' field, so if there are
only service ports on the network there is no reason to block changing
it from shared to not shared.
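
The suggested fix amounts to ignoring service-owned ports when deciding whether a network is "in use by multiple tenants" for sharing purposes. A sketch; the `network:` device_owner prefix matches neutron's convention for service ports (router interfaces, gateways, DHCP), but the helper itself is hypothetical:

```python
def can_unshare(ports, owner_tenant):
    """Return True if clearing the shared flag should be allowed.

    Service ports (device_owner starting with 'network:') don't count
    as tenant usage; only real ports from other tenants block the change.
    """
    for port in ports:
        if port["device_owner"].startswith("network:"):
            continue  # service port: router gateway, DHCP, etc.
        if port["tenant_id"] != owner_tenant:
            return False  # another tenant actually uses this network
    return True
```

Under this rule, the sharetest network from the report (used only as a router gateway) could be unshared without the "Multiple tenants are using it" error.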

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1293184

Title:
  Can't clear shared flag of unused network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  A network marked as external can be used as a gateway for tenant routers, 
even though it's not necessarily marked as shared.
  If the 'shared' attribute is changed from True to False for such a network 
you get an error:
  Unable to reconfigure sharing settings for network sharetest. Multiple 
tenants are using it

  This is clearly not the intention of the 'shared' field, so if there
  are only service ports on the network there is no reason to block
  changing it from shared to not shared.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1293184/+subscriptions
