[Yahoo-eng-team] [Bug 1604064] Re: ovn ml2 mechanism driver tcp connectors

2016-07-18 Thread Ryan Moats
This may have neutron pieces that need to be fixed, but the defect as
written should also include the networking-ovn project.

Also, removed the ovn tag because that's not valid. Does neutron have a
pure ml2 tag now?


** Also affects: networking-ovn
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604064

Title:
  ovn ml2 mechanism driver tcp connectors

Status in networking-ovn:
  New
Status in neutron:
  New

Bug description:
  When a TCP connection from the OVN ml2 mechanism driver dies (in my
  scenario, this is due to a UCARP failover), a new TCP connection does
  not get generated for port monitoring.

  Reproduction steps:
  1. Set up UCARP between 2 nodes
  2. Set OVN north database and south database on both nodes
  3. Point the ml2 driver to the UCARP address (north and south ports)
  4. Point the ovn-controllers to the UCARP address (south database port)
  5. Boot a VM
  6. View VM entries in the north database and south database OVN tables
  7. See that port status is UP in north database
  8. See that Neutron still has status of VM as down

  **A temporary workaround is to restart neutron-server, thus resetting
  the TCP connections.
  **I have not verified that the problem is the TCP connections, but it
  is currently my best guess.


  Linux Version: Ubuntu 14.04
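The missing behavior can be sketched generically. This is a minimal, hypothetical retry loop (the names `connect_with_retry` and `make_conn`, and the backoff parameters, are illustrative and not from networking-ovn) showing the kind of reconnection the driver would need after a UCARP failover kills its socket:

```python
import time


def connect_with_retry(make_conn, max_attempts=5, base_delay=0.01):
    """Return a live connection, retrying with exponential backoff.

    make_conn is any zero-argument callable that either returns a
    connection object or raises OSError (socket errors are OSErrors).
    """
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return make_conn()
        except OSError:
            if attempt == max_attempts:
                raise
            # Back off so we don't hammer the VIP mid-failover.
            time.sleep(delay)
            delay *= 2
```

A driver holding a monitoring connection would invoke something like this whenever its socket dies, instead of keeping the dead connection forever.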

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1604064/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1603444] [NEW] Add migration test to VPNaaS functional job

2016-07-15 Thread Ryan Moats
Public bug reported:

Change set [1] requires that migration testing be added to the functional job
before it can merge.

[1] https://review.openstack.org/#/c/342335

** Affects: neutron
 Importance: High
 Status: New


** Tags: vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1603444

Title:
  Add migration test to VPNaaS functional job

Status in neutron:
  New

Bug description:
  Change set [1] requires that migration testing be added to the functional job
  before it can merge.

  [1] https://review.openstack.org/#/c/342335

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1603444/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1600530] [NEW] Part 2 of engine DB facade breaks metadata agent

2016-07-09 Thread Ryan Moats
Public bug reported:

On devstack, tip of tree master:

Commit [1] breaks metadata agent on compute nodes with the following
error:

2016-07-09 16:49:52.427 CRITICAL neutron [req-ded33b76-bd78-4ce6-916e-53d8eedfc1d3 None None] ArgumentError: Could not parse rfc1738 URL from string ''
2016-07-09 16:49:52.427 TRACE neutron Traceback (most recent call last):
2016-07-09 16:49:52.427 TRACE neutron   File "/usr/local/bin/neutron-metadata-agent", line 10, in <module>
2016-07-09 16:49:52.427 TRACE neutron     sys.exit(main())
2016-07-09 16:49:52.427 TRACE neutron   File "/opt/stack/neutron/neutron/cmd/eventlet/agents/metadata.py", line 17, in main
2016-07-09 16:49:52.427 TRACE neutron     metadata_agent.main()
2016-07-09 16:49:52.427 TRACE neutron   File "/opt/stack/neutron/neutron/agent/metadata_agent.py", line 41, in main
2016-07-09 16:49:52.427 TRACE neutron     proxy.run()
2016-07-09 16:49:52.427 TRACE neutron   File "/opt/stack/neutron/neutron/agent/metadata/agent.py", line 298, in run
2016-07-09 16:49:52.427 TRACE neutron     mode=self._get_socket_mode())
2016-07-09 16:49:52.427 TRACE neutron   File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 372, in start
2016-07-09 16:49:52.427 TRACE neutron     self._launch(application, workers=workers)
2016-07-09 16:49:52.427 TRACE neutron   File "/opt/stack/neutron/neutron/wsgi.py", line 206, in _launch
2016-07-09 16:49:52.427 TRACE neutron     api.dispose()
2016-07-09 16:49:52.427 TRACE neutron   File "/opt/stack/neutron/neutron/db/api.py", line 110, in dispose
2016-07-09 16:49:52.427 TRACE neutron     get_engine().pool.dispose()
2016-07-09 16:49:52.427 TRACE neutron   File "/opt/stack/neutron/neutron/db/api.py", line 106, in get_engine
2016-07-09 16:49:52.427 TRACE neutron     return context_manager.get_legacy_facade().get_engine()
2016-07-09 16:49:52.427 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 636, in get_legacy_facade
2016-07-09 16:49:52.427 TRACE neutron     return self._factory.get_legacy_facade()
2016-07-09 16:49:52.427 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 256, in get_legacy_facade
2016-07-09 16:49:52.427 TRACE neutron     self._start()
2016-07-09 16:49:52.427 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 338, in _start
2016-07-09 16:49:52.427 TRACE neutron     engine_args, maker_args)
2016-07-09 16:49:52.427 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py", line 362, in _setup_for_connection
2016-07-09 16:49:52.427 TRACE neutron     sql_connection=sql_connection, **engine_kwargs)
2016-07-09 16:49:52.427 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py", line 112, in create_engine
2016-07-09 16:49:52.427 TRACE neutron     url = sqlalchemy.engine.url.make_url(sql_connection)
2016-07-09 16:49:52.427 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 186, in make_url
2016-07-09 16:49:52.427 TRACE neutron     return _parse_rfc1738_args(name_or_url)
2016-07-09 16:49:52.427 TRACE neutron   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 235, in _parse_rfc1738_args
2016-07-09 16:49:52.427 TRACE neutron     "Could not parse rfc1738 URL from string '%s'" % name)
2016-07-09 16:49:52.427 TRACE neutron ArgumentError: Could not parse rfc1738 URL from string ''
2016-07-09 16:49:52.427 TRACE neutron

Reverting commit [1] allows the metadata agent to start.

[1] https://review.openstack.org/312393
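The traceback shows an engine being built from an empty connection string. A hypothetical guard (the names `safe_get_engine` and the injected `create_engine` callable are illustrative; the actual fix is whatever the revert or follow-up patch does) would simply refuse to build an engine when no URL is configured:

```python
def safe_get_engine(connection_url, create_engine):
    """Build an engine only when a connection URL is configured.

    The metadata agent may run with no database section at all, so
    connection_url can legitimately be ''; passing that empty string to
    SQLAlchemy's URL parser is what raises the ArgumentError above.
    """
    if not connection_url:
        return None
    return create_engine(connection_url)
```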

** Affects: neutron
 Importance: High
 Status: New


** Tags: l3-ipam-dhcp

https://bugs.launchpad.net/bugs/1600530

Title:
  Part 2 of engine DB facade breaks metadata agent

Status in neutron:
  New

Bug description:
  On devstack, tip of tree master:

  Commit 

[Yahoo-eng-team] [Bug 1599343] [NEW] VPNaaS should have a fullstack job that actually tests itself, not neutron

2016-07-05 Thread Ryan Moats
Public bug reported:

As per the comments on [1], the neutron-vpnaas check and gate pipelines
include a fullstack job that tests the base Neutron L3 agent.  This job
should be replaced/augmented to actually deploy and test the VPNaaS
agent itself.

** Affects: neutron
 Importance: High
 Status: New


** Tags: vpnaas

https://bugs.launchpad.net/bugs/1599343

Title:
  VPNaaS should have a fullstack job that actually tests itself, not
  neutron

Status in neutron:
  New

Bug description:
  As per the comments on [1], the neutron-vpnaas check and gate
  pipelines include a fullstack job that tests the base Neutron L3
  agent.  This job should be replaced/augmented to actually deploy and
  test the VPNaaS agent itself.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1599343/+subscriptions



[Yahoo-eng-team] [Bug 1598466] [NEW] Neutron VPNaas gate functional tests failing on race condition

2016-07-02 Thread Ryan Moats
Public bug reported:

gate-neutron-vpnaas-dsvm-functional-sswan and gate-neutron-vpnaas-dsvm-
functional are failing on a race condition in
test_ipsec_site_connections_with_l3ha_routers:

ft1.4: 
neutron_vpnaas.tests.functional.common.test_scenario.TestIPSecScenario.test_ipsec_site_connections_with_l3ha_routers_StringException:
 Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File "neutron_vpnaas/tests/functional/common/test_scenario.py", line 667, in 
test_ipsec_site_connections_with_l3ha_routers
self.check_ping(site1, site2, 0)
  File "neutron_vpnaas/tests/functional/common/test_scenario.py", line 519, in 
check_ping
timeout=8, count=4)
  File "/opt/stack/new/neutron/neutron/tests/common/net_helpers.py", line 110, 
in assert_ping
dst_ip])
  File "/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 876, in 
execute
log_fail_as_error=log_fail_as_error, **kwargs)
  File "/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 138, in 
execute
raise RuntimeError(msg)
RuntimeError: Exit code: 1; Stdin: ; Stdout: PING 35.4.2.5 (35.4.2.5) 56(84) 
bytes of data.

--- 35.4.2.5 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

; Stderr:
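Races like this one are usually cured by polling for connectivity rather than pinging once right after the site connections are created. A minimal, hypothetical polling helper (the name `wait_until` and its defaults are illustrative, not from neutron-vpnaas):

```python
import time


def wait_until(predicate, timeout=30.0, interval=1.0):
    """Poll predicate() until it returns True or the timeout expires.

    Returns True on success and False on timeout.  Waiting for the
    IPsec tunnel to come up before asserting on ping removes the race.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()
```

The test's `check_ping` call would then run inside such a wait loop instead of asserting on the first attempt.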

** Affects: neutron
 Importance: High
 Status: New


** Tags: vpnaas

** Changed in: neutron
   Importance: Critical => High

https://bugs.launchpad.net/bugs/1598466

Title:
  Neutron VPNaas gate functional tests failing on race condition

Status in neutron:
  New

Bug description:
  gate-neutron-vpnaas-dsvm-functional-sswan and gate-neutron-vpnaas-
  dsvm-functional are failing on a race condition in
  test_ipsec_site_connections_with_l3ha_routers:

  ft1.4: 
neutron_vpnaas.tests.functional.common.test_scenario.TestIPSecScenario.test_ipsec_site_connections_with_l3ha_routers_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "neutron_vpnaas/tests/functional/common/test_scenario.py", line 667, 
in test_ipsec_site_connections_with_l3ha_routers
  self.check_ping(site1, site2, 0)
File "neutron_vpnaas/tests/functional/common/test_scenario.py", line 519, 
in check_ping
  timeout=8, count=4)
File "/opt/stack/new/neutron/neutron/tests/common/net_helpers.py", line 
110, in assert_ping
  dst_ip])
File "/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 876, in 
execute
  log_fail_as_error=log_fail_as_error, **kwargs)
File "/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 138, in 
execute
  raise RuntimeError(msg)
  RuntimeError: Exit code: 1; Stdin: ; Stdout: PING 35.4.2.5 (35.4.2.5) 56(84) 
bytes of data.

  --- 35.4.2.5 ping statistics ---
  1 packets transmitted, 0 received, 100% packet loss, time 0ms

  ; Stderr:

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1598466/+subscriptions



[Yahoo-eng-team] [Bug 1597362] [NEW] ML2 manager throws spurious error messages

2016-06-29 Thread Ryan Moats
Public bug reported:

ML2 manager, while extending the network dictionary, logs an ERROR-level
message if the network being created has no segments.  Since this
doesn't appear to halt operation, the error message pollutes the log
file.

Version: master
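One plausible fix is to demote the message, since the condition is non-fatal. This is a hypothetical sketch (the function name and the dict-shaped network are illustrative, not the actual ML2 manager code):

```python
import logging

LOG = logging.getLogger(__name__)


def report_missing_segments(network):
    """Log at DEBUG, not ERROR, when a network has no segments.

    Because the condition does not halt the operation, ERROR is the
    wrong severity; DEBUG keeps the information available without
    polluting operator logs.
    """
    segments = network.get("segments") or []
    if not segments:
        LOG.debug("Network %s has no segments to extend", network.get("id"))
    return segments
```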

** Affects: neutron
 Importance: Undecided
 Assignee: Ryan Moats (rmoats)
 Status: In Progress

** Description changed:

- ML2 manager, while extending the network dictionary, reports an logs an ERROR
- level message if the network being created has no segments.  Since this 
doesn't
- appear to halt operation, the error message pollutes the log file.
+ ML2 manager, while extending the network dictionary, reports an logs an
+ ERROR level message if the network being created has no segments.  Since
+ this doesn't appear to halt operation, the error message pollutes the
+ log file.
  
  Version: master

https://bugs.launchpad.net/bugs/1597362

Title:
  ML2 manager throws spurious error messages

Status in neutron:
  In Progress

Bug description:
  ML2 manager, while extending the network dictionary, logs an
  ERROR-level message if the network being created has no segments.
  Since this doesn't appear to halt operation, the error message
  pollutes the log file.

  Version: master

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597362/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580033] Re: neutron help update for updatable parameters

2016-05-10 Thread Ryan Moats
moving over to python-neutronclient as that is the project this is a bug
on.

** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Status: New => Triaged

** Changed in: python-neutronclient
   Importance: Undecided => Low

https://bugs.launchpad.net/bugs/1580033

Title:
  neutron help update for updatable parameters

Status in python-neutronclient:
  Triaged

Bug description:
  Currently neutron port-update has following parameters in the help

  stack@scalecntl:~/scale$ neutron port-update
  usage: neutron port-update [-h] [--request-format {json}] [--name NAME]
 [--fixed-ip subnet_id=SUBNET,ip_address=IP_ADDR]
 [--device-id DEVICE_ID]
 [--device-owner DEVICE_OWNER]
 [--admin-state-up {True,False}]
 [--security-group SECURITY_GROUP | 
--no-security-groups]
 [--extra-dhcp-opt EXTRA_DHCP_OPTS]
 [--qos-policy QOS_POLICY | --no-qos-policy]
 [--allowed-address-pair 
ip_address=IP_ADDR[,mac_address=MAC_ADDR]
 | --no-allowed-address-pairs]
 [--dns-name DNS_NAME | --no-dns-name]
 PORT
  neutron port-update: error: too few arguments
  Try 'neutron help port-update' for more information.

  It does not have the following updatable parameters in the help even
  though they are updatable:

  binding:vnic_type
  binding:host_id

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1580033/+subscriptions



[Yahoo-eng-team] [Bug 1579312] Re: connection-limit in neutron lbaas-listener-create

2016-05-07 Thread Ryan Moats
As LBaaS v1 is deprecated, it doesn't make sense to add a conditional
help statement.

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
   Status: Invalid => Opinion

https://bugs.launchpad.net/bugs/1579312

Title:
  connection-limit in neutron lbaas-listener-create

Status in neutron:
  Opinion

Bug description:
  When I check help of neutron lbaas-listener-create,
  [root@opencos2 ~(keystone_admin)]# neutron lbaas-listener-create -h
  usage: neutron lbaas-listener-create [-h] [-f {shell,table,value}] [-c COLUMN]
   [--max-width ] [--prefix PREFIX]
   [--request-format {json,xml}]
   [--tenant-id TENANT_ID]
   [--admin-state-down]
   [--connection-limit CONNECTION_LIMIT]
   [--description DESCRIPTION] [--name NAME]
   --loadbalancer LOADBALANCER --protocol
   {TCP,HTTP,HTTPS} --protocol-port PORT

  LBaaS v2 Create a listener.

  optional arguments:
-h, --helpshow this help message and exit
--request-format {json,xml}
  The XML or JSON request format.
--tenant-id TENANT_ID
  The owner tenant ID.
--admin-state-downSet admin state up to false.
--connection-limit CONNECTION_LIMIT
  The maximum number of connections per second allowed
  for the vip. Positive integer or -1 for unlimited
  (default).
  In LBaaS v1, one VIP has only one pool, so the note "The maximum
  number of connections per second allowed for the vip" makes sense.
  But in LBaaS v2, first, the VIP is abandoned; second, one loadbalancer
  can have more than one listener, so this note is not suitable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1579312/+subscriptions



[Yahoo-eng-team] [Bug 1579080] Re: VPNaaS crash during VPN Connection creation

2016-05-06 Thread Ryan Moats
thanks, Paul, that's what my memory said, but it's good to verify it

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
   Importance: Undecided => Wishlist

https://bugs.launchpad.net/bugs/1579080

Title:
  VPNaaS crash during VPN Connection creation

Status in neutron:
  Invalid

Bug description:
  I have a Liberty OpenStack environment with L3HA (VRRP) on CentOS 7;
  the library used for VPN is Libreswan 3.15.

  During VPNaaS creation between two tenants of the same OpenStack, the
  creation process reports an error in the vpn_agent.log file:

    File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 
504, in _process_router_update
   self._process_router_if_compatible(router)
     File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 
443, in _process_router_if_compatible
   self._process_updated_router(router)
     File "/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py", line 
457, in _process_updated_router
   ri.process(self)
     File "/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", 
line 379, in process
   super(HaRouter, self).process(agent)
     File "/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 359, 
in call
   self.logger(e)
     File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 195, 
in __exit__
   six.reraise(self.type_, self.value, self.tb)
     File "/usr/lib/python2.7/site-packages/neutron/common/utils.py", line 356, 
in call
   return func(*args, **kwargs)
     File "/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", 
line 691, in process
   self._process_internal_ports(agent.pd)
     File "/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py", 
line 395, in _process_internal_ports
   self.internal_network_added(p)
     File "/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", 
line 277, in internal_network_added
   self._disable_ipv6_addressing_on_interface(interface_name)
     File "/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", 
line 237, in _disable_ipv6_addressing_on_interface
   if self._should_delete_ipv6_lladdr(ipv6_lladdr):
     File "/usr/lib/python2.7/site-packages/neutron/agent/l3/ha_router.py", 
line 219, in _should_delete_ipv6_lladdr
   if manager.get_process().active:
   AttributeError: 'NoneType' object has no attribute 'get_process'

  I see this type of error only during VPN creation between two
  OpenStack tenants; VPNs between external devices (Cisco ASA) work
  properly.

  The neutron-vpnaas service continuously sends this log to the
  vpn_agent.log file until the VPN is deleted and the neutron-vpn-agent
  service is restarted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1579080/+subscriptions



[Yahoo-eng-team] [Bug 1578897] Re: OVS: Add support for IPv6 addresses as tunnel endpoints

2016-05-06 Thread Ryan Moats
** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Confirmed

https://bugs.launchpad.net/bugs/1578897

Title:
  OVS: Add support for IPv6 addresses as tunnel endpoints

Status in neutron:
  Confirmed
Status in openstack-manuals:
  New

Bug description:
  https://review.openstack.org/257335
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 773394a1887bec6ab4c2ff0308f0e830e9a9089f
  Author: Frode Nordahl 
  Date:   Mon Dec 14 13:51:48 2015 +0100

  OVS: Add support for IPv6 addresses as tunnel endpoints
  
  Remove IPv4 restriction for local_ip configuration statement.
  
  Check for IP version mismatch of local_ip and remote_ip before creating
  tunnel.
  
  Create hash of remote IPv6 address for OVS interface/port name with least
  posibility for collissions.
  
  Fix existing tests that fail because of the added check for IP version
  and subsequently valid IP addresses in _setup_tunnel_port.
  
  DocImpact
  
  Change-Id: I9ec137ef8c688b678a0c61f07e9a01382acbeb13
  Closes-Bug: #1525895
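For the manuals side, the commit means `local_ip` in the OVS agent configuration may now be an IPv6 address. A sketch of what such a configuration could look like (the address is from the IPv6 documentation range, and the file is typically `openvswitch_agent.ini`; both are assumptions here, not taken from the commit):

```ini
[ovs]
# local_ip may now be an IPv6 address; remote tunnel endpoints must
# use the same IP version, or tunnel creation is refused.
local_ip = 2001:db8::10
```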

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1578897/+subscriptions



[Yahoo-eng-team] [Bug 1577306] Re: coverage post job fails

2016-05-02 Thread Ryan Moats
Looking in logstash, this doesn't appear *that* often in the past seven
days (only seven times) but appears to be cross-project - reassigning to
those projects for further triage.

** Project changed: neutron => oslo.concurrency

** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1577306

Title:
  coverage post job fails

Status in oslo.concurrency:
  New
Status in python-glanceclient:
  New

Bug description:
  After each merge, coverage is run - and fails currently.

  Example:

  http://logs.openstack.org/18/187415f34dd34169a563088dbb216ab2ed533992/post
  /neutron-coverage/c93fb3a/

  Error is:
  Unrecognized option '[report] ignore-errors=' in config file .coveragerc
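  coverage.py 4.0 dropped support for the dashed option spellings, which
  is consistent with this failure. Assuming that is the cause (the actual
  fix for the job may differ), the `.coveragerc` entry would need the
  underscore form:

```ini
[report]
# coverage.py >= 4.0 only accepts the underscore spelling of this option
ignore_errors = True
```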

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.concurrency/+bug/1577306/+subscriptions



[Yahoo-eng-team] [Bug 1575402] Re: VPNaaS update NAT rules can generate stack trace

2016-04-27 Thread Ryan Moats
This isn't a bug on tip of tree master, it's a bug on master+1.  It can
be reopened once the referenced patch merges.
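The failure mode is a None iptables manager being dereferenced during sync(). A hypothetical guard (the standalone function signature is illustrative; the real code is a method on the ipsec.py driver, and the real fix is the referenced patch) looks like:

```python
def remove_nat_rule(iptables_manager, chain, rule, top=False):
    """Skip NAT rule removal when the router's iptables manager is gone.

    During sync() a router can be torn down concurrently, leaving
    iptables_manager as None; dereferencing .ipv4 on it then raises
    the AttributeError reported in this bug, so bail out instead.
    Returns True if the rule was removed, False if skipped.
    """
    if iptables_manager is None:
        return False
    iptables_manager.ipv4['nat'].remove_rule(chain, rule, top=top)
    return True
```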

** Changed in: neutron
   Status: Incomplete => Invalid

https://bugs.launchpad.net/bugs/1575402

Title:
  VPNaaS update NAT rules can generate stack trace

Status in neutron:
  Invalid

Bug description:
  neutron-vpn-agent can generate stack traces with AttributeError:
  'NoneType' object has no attribute 'ipv4' in two different locations
  in ipsec.py, in add_nat_rule() and remove_nat_rule(), during sync()
  operations while site connections are being created.

  Here is an example stack trace (based on a Liberty distribution, but I
  believe this is still an issue in master/newton):

  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 142, in _dispatch_and_reply
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher 
executor_callback))
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 186, in _dispatch
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher 
executor_callback)
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 129, in _do_dispatch
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 675, in vpnservice_updated
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher 
self.sync(context, [router] if router else [])
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/oslo_concurrency/lockutils.py",
 line 271, in inner
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher return 
f(*args, **kwargs)
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 830, in sync
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher 
self._delete_vpn_processes(sync_router_ids, router_ids)
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 860, in _delete_vpn_processes
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher 
self.destroy_process(process_id)
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 728, in destroy_process
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher 
self._update_nat(vpnservice, self.remove_nat_rule)
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 664, in _update_nat
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher 
top=True)
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher   File 
"/opt/stack/venv/neutron-20160426T025546Z/lib/python2.7/site-packages/neutron_vpnaas/services/vpn/device_drivers/ipsec.py",
 line 629, in remove_nat_rule
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher 
iptables_manager.ipv4['nat'].remove_rule(chain, rule, top=top)
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher 
AttributeError: 'NoneType' object has no attribute 'ipv4'
  2016-04-26 20:26:27.894 28022 ERROR oslo_messaging.rpc.dispatcher

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1575402/+subscriptions



[Yahoo-eng-team] [Bug 1575247] [NEW] network query when creating subnet looks too complex

2016-04-26 Thread Ryan Moats
Public bug reported:

When creating a subnet, the network query appears to translate to:

   67 Query SELECT networks.tenant_id AS 
networks_tenant_id, networks.id AS networks_id, networks.name AS networks_name, 
networks.status AS networks_status, networks.admin_state_up AS 
networks_admin_state_up, networks.mtu AS networks_mtu, 
networks.vlan_transparent AS networks_vlan_transparent, 
networks.availability_zone_hints AS networks_availability_zone_hints, 
networks.standard_attr_id AS networks_standard_attr_id, 
subnetpoolprefixes_1.cidr AS subnetpoolprefixes_1_cidr, 
subnetpoolprefixes_1.subnetpool_id AS subnetpoolprefixes_1_subnetpool_id, 
standardattributes_1.created_at AS standardattributes_1_created_at, 
standardattributes_1.updated_at AS standardattributes_1_updated_at, 
standardattributes_1.id AS standardattributes_1_id, 
standardattributes_1.resource_type AS standardattributes_1_resource_type, 
standardattributes_1.description AS standardattributes_1_description, 
tags_1.standard_attr_id AS tags_1_standard_attr_id, tags_1.tag AS tags_1_tag, 
subnetpools_1.tenant_id AS subnetpools_1_tenant_id, subnetpools_1.id AS subnetpools_1_id, subnetpools_1.name AS 
subnetpools_1_name, subnetpools_1.ip_version AS subnetpools_1_ip_version, 
subnetpools_1.default_prefixlen AS subnetpools_1_default_prefixlen, 
subnetpools_1.min_prefixlen AS subnetpools_1_min_prefixlen, 
subnetpools_1.max_prefixlen AS subnetpools_1_max_prefixlen, 
subnetpools_1.shared AS subnetpools_1_shared, subnetpools_1.is_default AS 
subnetpools_1_is_default, subnetpools_1.default_quota AS 
subnetpools_1_default_quota, subnetpools_1.hash AS subnetpools_1_hash, 
subnetpools_1.address_scope_id AS subnetpools_1_address_scope_id, 
subnetpools_1.standard_attr_id AS subnetpools_1_standard_attr_id, 
ipallocationpools_1.id AS ipallocationpools_1_id, ipallocationpools_1.subnet_id 
AS ipallocationpools_1_subnet_id, ipallocationpools_1.first_ip AS 
ipallocationpools_1_first_ip, ipallocationpools_1.last_ip AS 
ipallocationpools_1_last_ip, dnsnameservers_1.address AS 
dnsnameservers_1_address, dnsnameservers_1.subnet_id AS dnsnameservers_1_subnet_id, dnsnameservers_1.`order` AS dnsnameservers_1_order, 
subnetroutes_1.destination AS subnetroutes_1_destination, 
subnetroutes_1.nexthop AS subnetroutes_1_nexthop, subnetroutes_1.subnet_id AS 
subnetroutes_1_subnet_id, networkrbacs_1.tenant_id AS networkrbacs_1_tenant_id, 
networkrbacs_1.id AS networkrbacs_1_id, networkrbacs_1.target_tenant AS 
networkrbacs_1_target_tenant, networkrbacs_1.action AS networkrbacs_1_action, 
networkrbacs_1.object_id AS networkrbacs_1_object_id, 
standardattributes_2.created_at AS standardattributes_2_created_at, 
standardattributes_2.updated_at AS standardattributes_2_updated_at, 
standardattributes_2.id AS standardattributes_2_id, 
standardattributes_2.resource_type AS standardattributes_2_resource_type, 
standardattributes_2.description AS standardattributes_2_description, 
tags_2.standard_attr_id AS tags_2_standard_attr_id, tags_2.tag AS tags_2_tag, 
subnets_1.tenant_id AS subnets_1_tenant_id, subnets_1.id AS subnets_1_id, 
subnets_1.name AS subnets_1_name, subnets_1.network_id AS
subnets_1_network_id, 
subnets_1.subnetpool_id AS subnets_1_subnetpool_id, subnets_1.ip_version AS 
subnets_1_ip_version, subnets_1.cidr AS subnets_1_cidr, subnets_1.gateway_ip AS 
subnets_1_gateway_ip, subnets_1.enable_dhcp AS subnets_1_enable_dhcp, 
subnets_1.ipv6_ra_mode AS subnets_1_ipv6_ra_mode, subnets_1.ipv6_address_mode 
AS subnets_1_ipv6_address_mode, subnets_1.standard_attr_id AS 
subnets_1_standard_attr_id, networkrbacs_2.tenant_id AS 
networkrbacs_2_tenant_id, networkrbacs_2.id AS networkrbacs_2_id, 
networkrbacs_2.target_tenant AS networkrbacs_2_target_tenant, 
networkrbacs_2.action AS networkrbacs_2_action, networkrbacs_2.object_id AS 
networkrbacs_2_object_id, agents_1.id AS agents_1_id, agents_1.agent_type AS 
agents_1_agent_type, agents_1.`binary` AS agents_1_binary, agents_1.topic AS 
agents_1_topic, agents_1.host AS agents_1_host, agents_1.availability_zone AS 
agents_1_availability_zone, agents_1.admin_state_up AS
agents_1_admin_state_up, agents_1.created_at AS agents_1_created_at,
agents_1.started_at AS 
agents_1_started_at, agents_1.heartbeat_timestamp AS 
agents_1_heartbeat_timestamp, agents_1.description AS agents_1_description, 
agents_1.configurations AS agents_1_configurations, agents_1.resource_versions 
AS agents_1_resource_versions, agents_1.`load` AS agents_1_load, 
standardattributes_3.created_at AS standardattributes_3_created_at, 
standardattributes_3.updated_at AS standardattributes_3_updated_at, 
standardattributes_3.id AS standardattributes_3_id, 
standardattributes_3.resource_type AS standardattributes_3_resource_type, 
standardattributes_3.description AS standardattributes_3_description, 
tags_3.standard_attr_id AS tags_3_standard_attr_id, tags_3.tag AS tags_3_tag, 
externalnetworks_1.network_id AS externalnetworks_1_network_id, 

[Yahoo-eng-team] [Bug 1535850] [NEW] multinode check pipeline failing on Permission denied for scp

2016-01-19 Thread Ryan Moats
*** This bug is a duplicate of bug 1531187 ***
https://bugs.launchpad.net/bugs/1531187

Public bug reported:

multinode check pipeline jobs are failing with this signature:

scp: /opt/stack/new/devstack/localrc: Permission denied

Logstash query:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%20%5C%22scp%3A%20%2Fopt%2Fstack%2Fnew%2Fdevstack%2Flocalrc%3A%20Permission%20denied%5C%22

shows this failure has occurred 14 times in the last seven days in the
gate-tempest-dsvm-neutron-multinode-full job only

** Affects: neutron
 Importance: Undecided
 Status: Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1535850

Title:
  multinode check pipeline failing on Permission denied for scp

Status in neutron:
  Invalid

Bug description:
  multinode check pipeline jobs are failing with this signature:

  scp: /opt/stack/new/devstack/localrc: Permission denied

  Logstash query:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%20%5C%22scp%3A%20%2Fopt%2Fstack%2Fnew%2Fdevstack%2Flocalrc%3A%20Permission%20denied%5C%22

  shows this failure has occurred 14 times in the last seven days in the
  gate-tempest-dsvm-neutron-multinode-full job only

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1535850/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1532338] [NEW] ipv6 neighbor advertisement storm seen in stable/liberty cloud

2016-01-08 Thread Ryan Moats
Public bug reported:

A stable/liberty cloud with 20 network nodes, running OVS and supporting
1200 projects.  Each project has one network with IPv4 and IPv6 subnets
and one project router attached to the external network.

Network nodes are seeing 1000 IPv6 Neighbour Advertisements within 2.3
seconds.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: liberty-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1532338

Title:
  ipv6 neighbor advertisement storm seen in stable/liberty cloud

Status in neutron:
  New

Bug description:
  A stable/liberty cloud with 20 network nodes, running OVS and supporting
  1200 projects.  Each project has one network with IPv4 and IPv6 subnets
  and one project router attached to the external network.

  Network nodes are seeing 1000 IPv6 Neighbour Advertisements within 2.3
  seconds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1532338/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524898] [NEW] Volume based live migration aborted unexpectedly

2015-12-10 Thread Ryan Moats
Public bug reported:

Volume based live migration is failing during tempest testing in the
check and experimental pipelines

http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:\%22Live%20Migration%20failure:%20operation%20failed:%20migration%20job:%20unexpectedly%20failed\%22%20AND%20tags:\%22screen-n-cpu.txt\%22

shows 42 failures since 12/8

** Affects: nova
 Importance: High
 Status: Confirmed


** Tags: libvirt live-migration volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1524898

Title:
  Volume based live migration aborted unexpectedly

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Volume based live migration is failing during tempest testing in the
  check and experimental pipelines

  
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:\%22Live%20Migration%20failure:%20operation%20failed:%20migration%20job:%20unexpectedly%20failed\%22%20AND%20tags:\%22screen-n-cpu.txt\%22

  shows 42 failures since 12/8

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1524898/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1335640] Re: Neutron doesn't support OSprofiler

2015-12-08 Thread Ryan Moats
** Changed in: neutron
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1335640

Title:
  RFE: Neutron doesn't support OSprofiler

Status in neutron:
  New

Bug description:
  To be able to improve OpenStack, Neutron should have a cross
  service/project profiler that builds one trace that goes through all
  services/projects and measures the most important parts.

  So I built a library specifically for OpenStack that allows us to do
  this. More about it:
  https://github.com/stackforge/osprofiler

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1335640/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513048] Re: the container can not be pinged via name space, after 860 tenants/networks/container created

2015-11-04 Thread Ryan Moats
Changed this to invalid/low because this is a defect against kilo, which
is now (according to [1]) in security support only.  This needs to be
retested against liberty/master and re-filed.

[1] https://wiki.openstack.org/wiki/Releases

** Changed in: neutron
   Status: Triaged => Opinion

** Changed in: neutron
   Importance: High => Wishlist

** Changed in: neutron
   Status: Opinion => Invalid

** Changed in: neutron
   Importance: Wishlist => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1513048

Title:
  the container can not be pinged via name space, after 860
  tenants/networks/container created

Status in neutron:
  Invalid

Bug description:
  [Summary]
  the container can not be pinged via name space, after 860 
tenants/networks/container created

  [Topo]
  1 controller, 2 network nodes, 6 compute nodes, all in ubuntu 14.04
  (openstack version is 2015.1.2, linux kernel version is 3.19.0-31)
  root@ah:~# uname -a
  Linux ah.container13 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 
10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  root@ah:~# 
  root@ah:~# dpkg -l | grep neutron
  ii  neutron-common  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - common
  ii  neutron-plugin-ml2  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - ML2 plugin
  ii  neutron-server  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - server
  ii  python-neutron  1:2015.1.2-0ubuntu1~cloud0
all  Neutron is a virtual network service for Openstack - Python 

  library
  ii  python-neutron-fwaas2015.1.2-0ubuntu2~cloud0  
all  Firewall-as-a-Service driver for OpenStack Neutron
  ii  python-neutronclient1:2.3.11-0ubuntu1.2~cloud0
all  client - Neutron is a virtual network service for Openstack
  root@ah:~#

  [Description and expect result]
  the container should be pinged via name space

  [Reproducible or not]
  reproducible intermittently when a large number of
  tenants/networks/instances are configured

  [Recreate Steps]
  1) Use a script to create 860 tenants, 1 network/router in each tenant,
  and 1 cirros container in each network; all containers are associated
  with a FIP.

  2) Create one more tenant with 1 network/container in it.  The container
  reaches the Active state, but it cannot be pinged via the namespace.
  ISSUE

  
  [Configration]
  config files on controller/network/compute are attached

  [logs]
  instance can be in Active state:
  root@ah:~# nova --os-tenant-id 73731bbaf2db48f89a067604e3556e05 list
  
+--+-+++-++
  | ID   | Name| Status 
| Task State | Power State | Networks   
|
  
+--+-+++-++
  | d5ba18d5-aaf9-4ed6-9a2b-71d2b2f10bae | mexico_test_new_2_1_net1_vm | ACTIVE 
| -  | Running | mexico_test_new_2_1_net1=10.10.32.3, 172.168.6.211 
|
  
+--+-+++-++
  root@ah:~# keystone tenant-list | grep test_new_2_1
  | 73731bbaf2db48f89a067604e3556e05 | mexico_test_new_2_1 |   True  |
  root@ah:~# neutron net-list | grep exico_test_new_2_1_net1
  | a935642d-b56c-4a87-83c5-755f01bf0814 | mexico_test_new_2_1_net1 | 
bed0330f-e0ea-4bcc-bc75-96766dad32a7 10.10.32.0/24  |
  root@ah:~#

  on network node:
  root@ah:~# ip netns | grep a935642d-b56c-4a87-83c5-755f01bf0814
  qdhcp-a935642d-b56c-4a87-83c5-755f01bf0814
  root@ah:~# ip netns exec qdhcp-a935642d-b56c-4a87-83c5-755f01bf0814 ping 
10.10.32.3
  PING 10.10.32.3 (10.10.32.3) 56(84) bytes of data.
  From 10.10.32.2 icmp_seq=1 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=2 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=3 Destination Host Unreachable
  From 10.10.32.2 icmp_seq=4 Destination Host Unreachable>>>ISSUE

  [Root cause analysis or debug info]
  high load on the controller and network nodes

  [Attachment]
  log files on controller/network/compute are attached

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1513048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1511004] [NEW] _process_router_update can return without logging said return

2015-10-28 Thread Ryan Moats
Public bug reported:

In the L3 agent, _process_router_update process logs the start of each router 
update.
The execution branches that call process_prefix_update and _safe_router_removed 
immediately
continue without logging that the router update is finished.  This leads to 
confusion about when
processing of the last router stops.
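A common way to guarantee that the completion message appears on every exit branch is a try/finally around the branchy body. This is only an illustrative sketch: process_router_update and the branch values here are stand-ins, not the actual L3 agent code.

```python
import logging

LOG = logging.getLogger("l3_agent_sketch")

def process_router_update(update):
    """Sketch: pair every 'starting' log with a 'finished' log via finally."""
    LOG.debug("Starting router update for %s", update)
    try:
        if update == "removed":
            # early-return branch (stands in for _safe_router_removed)
            return "removed"
        if update == "prefix":
            # early-return branch (stands in for process_prefix_update)
            return "prefix-updated"
        return "processed"
    finally:
        # runs on every return path, so the completion is always logged
        LOG.debug("Finished router update for %s", update)
```

With this shape, adding a new early-return branch later cannot silently skip the completion log.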

** Affects: neutron
 Importance: Low
 Assignee: Ryan Moats (rmoats)
 Status: In Progress


** Tags: kilo-backport-potential liberty-backport-potential logging

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1511004

Title:
  _process_router_update can return without logging said return

Status in neutron:
  In Progress

Bug description:
  In the L3 agent, _process_router_update process logs the start of each router 
update.
  The execution branches that call process_prefix_update and 
_safe_router_removed immediately
  continue without logging that the router update is finished.  This leads to 
confusion about when
  processing of the last router stops.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1511004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1509008] [NEW] stable/kilo FixedIntervalLoopingCall error message not useful

2015-10-22 Thread Ryan Moats
Public bug reported:

The error message printed when a subclass of FixedIntervalLoopingCall exceeds 
its scheduled interval
shows information about the python object rather than the name of the function 
that is exceeding its
schedule.  Showing python object information is not useful to operators.

example from log:

2015-10-22 02:18:14.005 37767 WARNING
neutron.openstack.common.loopingcall [req-4f447ecc-0ea4-4651-883b-
1f7dab14beba ] task > run outlasted interval by
20.02 sec

This class is not present in stable/liberty or master, so this applies
to only stable/kilo.
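The unhelpful message comes from interpolating the bound-method object itself into the warning. A sketch of extracting a readable name instead (describe_task and StateReporter are hypothetical names for illustration, not the actual oslo code):

```python
def describe_task(task):
    """Return a human-readable name for a looping-call task.

    Bound methods and plain functions expose __qualname__; fall back to
    __name__ and finally repr() for arbitrary callables.
    """
    return getattr(task, "__qualname__",
                   getattr(task, "__name__", repr(task)))

class StateReporter:
    """Stand-in for the agent class whose method the looping call runs."""
    def report_state(self):
        pass

# Interpolating describe_task(task) instead of the raw object produces a
# message like "task StateReporter.report_state run outlasted interval...",
# which tells the operator which function is behind schedule.
```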

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: logging

** Summary changed:

- stable/kilo loopingcall.py error message not useful
+ stable/kilo FixedIntervalLoopingCall error message not useful

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509008

Title:
  stable/kilo FixedIntervalLoopingCall error message not useful

Status in neutron:
  New

Bug description:
  The error message printed when a subclass of FixedIntervalLoopingCall exceeds 
its scheduled interval
  shows information about the python object rather than the name of the 
function that is exceeding its
  schedule.  Showing python object information is not useful to operators.

  example from log:

  2015-10-22 02:18:14.005 37767 WARNING
  neutron.openstack.common.loopingcall [req-4f447ecc-0ea4-4651-883b-
  1f7dab14beba ] task > run outlasted interval
  by 20.02 sec

  This class is not present in stable/liberty or master, so this applies
  to only stable/kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507585] Re: the neutron prompt inaccuracy information when delete the interface from a router

2015-10-19 Thread Ryan Moats
** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
   Status: New => Confirmed

** Summary changed:

- the neutron prompt inaccuracy information when  delete the interface from  a 
router
+ router-interface-delete information prompt is inaccurate

** Project changed: neutron => python-neutronclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507585

Title:
  router-interface-delete information prompt is inaccurate

Status in python-neutronclient:
  Confirmed

Bug description:
  Reproduction steps:
  1. When I try to delete an interface from a router, neutron asks for a
  subnet ID instead of an interface ID, but the help prompt directs me to
  input an INTERFACE ID.  I don't know the difference between an interface
  ID and a subnet ID, but the name or prompt should at least be consistent.
  [root@nitinserver1 ~(keystone_admin)]# neutron router-interface-delete 
fe765595-3749-40df-82bf-5c985701080f
  usage: neutron router-interface-delete [-h] [--request-format {json,xml}]
     ROUTER INTERFACE<
  neutron router-interface-delete: error: too few arguments
  [root@nitinserver1 ~(keystone_admin)]# neutron router-interface-delete 
fe765595-3749-40df-82bf-5c985701080f 6fcd183a-585b-434c-be45-bb8abbb946b5
  Unable to find subnet with name 
'6fcd183a-585b-434c-be45-bb8abbb946b5'<
  [root@nitinserver1 ~(keystone_admin)]# neutron router-interface-delete 
fe765595-3749-40df-82bf-5c985701080f 7ef8b18b-489f-4f9c-922b-685651fc6eb6
  Removed interface from router fe765595-3749-40df-82bf-5c985701080f.
  [root@nitinserver1 ~(keystone_admin)]# neutron   router-port-list
fe765595-3749-40df-82bf-5c985701080f
  
+--+--+---+-+
  | id   | name | mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | c46628a7-3448-43b5-bf58-5fb832e38c21 |  | fa:16:3e:b7:d7:7d | 
{"subnet_id": "7ab67bd0-7cb0-4e47-bd2e-0aa277ebc31c", "ip_address": "20.1.1.1"} 
|
  
+--+--+---+-+
  [root@nitinserver1 ~(keystone_admin)]#
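The apparent mismatch in the transcript is that router-interface-delete keys on the subnet behind the router port, not on a port/interface ID. A minimal sketch, using the fixed_ips value copied from the port listing above, of pulling out the argument the command actually expects:

```python
import json

# fixed_ips value copied verbatim from the router-port-list output; the
# subnet_id inside it is the "INTERFACE" argument the CLI really wants.
fixed_ips = ('{"subnet_id": "7ab67bd0-7cb0-4e47-bd2e-0aa277ebc31c", '
             '"ip_address": "20.1.1.1"}')
subnet_id = json.loads(fixed_ips)["subnet_id"]
# `neutron router-interface-delete <router-id> <subnet_id>` is the form
# that succeeds, as the third attempt in the transcript shows.
```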

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1507585/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498315] Re: Suggest do not display the lbaas namespace interface ip when associate floating ip.

2015-10-19 Thread Ryan Moats
openstack/juno is in security support only

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498315

Title:
  Suggest do not display the lbaas namespace interface ip when associate
  floating ip.

Status in neutron:
  Won't Fix

Bug description:
  1. Create an LB pool and VIP.
  2. Associate the VIP with a floating IP; then you can see two IPs: one is
  the LB VIP, the other is the LBaaS namespace interface IP address.
  Suggest displaying only the VIP address, since the LBaaS namespace
  interface IP is not visible to the user.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498315/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1189909] Re: dhcp-agent does always provide IP address for instances with re-cycled IP addresses.

2015-10-14 Thread Ryan Moats
** Package changed: quantum (CentOS) => centos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1189909

Title:
  dhcp-agent does always provide IP address for instances with re-cycled
  IP addresses.

Status in neutron:
  Fix Released
Status in quantum package in Ubuntu:
  Confirmed
Status in CentOS:
  New

Bug description:
  Configuration: OpenStack Networking, OpenvSwitch Plugin (GRE tunnels), 
OpenStack Networking Security Groups
  Release: Grizzly

  Sometimes when creating instances, the dnsmasq instance associated with
  the tenant l2 network does not have configuration for the requesting
  mac address:

  Jun 11 09:30:23 d7m88-cofgod dnsmasq-dhcp[10083]: 
DHCPDISCOVER(tap98031044-d8) fa:16:3e:da:41:45 no address available
  Jun 11 09:30:33 d7m88-cofgod dnsmasq-dhcp[10083]: 
DHCPDISCOVER(tap98031044-d8) fa:16:3e:da:41:45 no address available

  Restarting the quantum-dhcp-agent resolved the issue:

  Jun 11 09:30:41 d7m88-cofgod dnsmasq-dhcp[11060]: 
DHCPDISCOVER(tap98031044-d8) fa:16:3e:da:41:45
  Jun 11 09:30:41 d7m88-cofgod dnsmasq-dhcp[11060]: DHCPOFFER(tap98031044-d8) 
10.5.0.2 fa:16:3e:da:41:45

  The IP address (10.5.0.2) was re-cycled from an instance that was
  destroyed just prior to creation of this one.

  ProblemType: Bug
  DistroRelease: Ubuntu 13.04
  Package: quantum-dhcp-agent 1:2013.1.1-0ubuntu1
  ProcVersionSignature: Ubuntu 3.8.0-23.34-generic 3.8.11
  Uname: Linux 3.8.0-23-generic x86_64
  ApportVersion: 2.9.2-0ubuntu8.1
  Architecture: amd64
  Date: Tue Jun 11 09:31:38 2013
  MarkForUpload: True
  PackageArchitecture: all
  ProcEnviron:
   TERM=screen
   PATH=(custom, no user)
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  SourcePackage: quantum
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.quantum.dhcp.agent.ini: [deleted]
  modified.conffile..etc.quantum.rootwrap.d.dhcp.filters: [deleted]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1189909/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257354] Re: Metering doesn't anymore respect the l3 agent binding

2015-10-14 Thread Ryan Moats
Removing the icehouse project, and marking incomplete to start the 60-day
cleanup timer - if still valid, please change the status and take
ownership.

** No longer affects: neutron/icehouse

** Changed in: neutron
   Status: Confirmed => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1257354

Title:
  Metering doesn't anymore respect the l3 agent binding

Status in neutron:
  Incomplete

Bug description:
  Since the old L3 mixin was moved into a service plugin, the metering
  service plugin no longer respects the L3 agent binding: instead of
  using the cast RPC method, it uses the fanout_cast method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1257354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330199] Re: rpc_workers does not work with Qpid

2015-09-29 Thread Ryan Moats
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330199

Title:
  rpc_workers does not work with Qpid

Status in oslo.messaging:
  Fix Released

Bug description:
  After setting rpc_workers to a value other than 0 and restarting
  neutron-server, no consumers are ever created for q-plugin within Qpid.

  It appears that all sub-processes of neutron-server hang at the
  self.connection.open() step in the impl_qpid.py reconnect method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.messaging/+bug/1330199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499893] [NEW] Native OVSDB DbSetCommand shows O(n) performance

2015-09-25 Thread Ryan Moats
Public bug reported:

Create 100 tenants, each with the following setup, where each router is
scheduled to the same legacy node whose L3 agent is configured to use the
native OVSDB interface.

tenant network --- router -- external network

Reference http://ibin.co/2GuI6plJvngR for a graph of performance during
setup of the 100 routers.  In that graph, the y-axis is time in seconds and
the x-axis is the pass number through _ovs_add_port (two passes per router
add).

DbSetCommand's execution time increases with each router add.  To support
scale, this needs to be closer to O(1) and perform significantly better
than using ovs-vsctl via the rootwrap daemon.
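One way to confirm per-call O(n) growth independently of OVSDB is to time successive calls and compare early versus late latencies. This generic harness (all names hypothetical) demonstrates the check with a toy operation that, like the reported behavior, rescans a growing data set on every call:

```python
import time

def per_call_latencies(op, n):
    """Time n successive calls of op(i) and return each duration."""
    out = []
    for i in range(n):
        t0 = time.perf_counter()
        op(i)
        out.append(time.perf_counter() - t0)
    return out

def looks_linear(latencies):
    """Crude O(n)-per-call check: last-quarter mean vs first-quarter mean.

    For an O(1) operation the two means stay comparable; for O(n) the
    late calls are markedly slower than the early ones.
    """
    q = max(1, len(latencies) // 4)
    first = sum(latencies[:q]) / q
    last = sum(latencies[-q:]) / q
    return last > 2 * first

# Toy operation standing in for a command that re-reads the whole table
# on every set: each call scans an ever-growing list.
table = []
def rescanning_set(i):
    table.append(i)
    sum(table)
```

Plotting the returned latencies against the call index gives the same kind of ramp as the referenced graph.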

** Affects: neutron
 Importance: Medium
 Status: New


** Tags: kilo-backport-potential liberty-rc-potential performance

** Changed in: neutron
   Importance: High => Medium

** Tags added: kilo-backport-potential liberty-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499893

Title:
  Native OVSDB DbSetCommand shows O(n) performance

Status in neutron:
  New

Bug description:
  Create 100 tenants, each with the following setup, where each router is
  scheduled to the same legacy node whose L3 agent is configured to use
  the native OVSDB interface.

  tenant network --- router -- external network

  Reference http://ibin.co/2GuI6plJvngR for a graph of performance during
  setup of the 100 routers.  In that graph, the y-axis is time in seconds
  and the x-axis is the pass number through _ovs_add_port (two passes per
  router add).

  DbSetCommand's execution time increases with each router add.  To support
  scale, this needs to be closer to O(1) and perform significantly
  better than using ovs-vsctl via the rootwrap daemon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499893/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494958] Re: router_info _process_internal_ports execution time is O(n)

2015-09-23 Thread Ryan Moats
*** This bug is a duplicate of bug 1494959 ***
https://bugs.launchpad.net/bugs/1494959

Further triage indicates this is a duplicate of
https://bugs.launchpad.net/neutron/+bug/1494959, so marking this as
invalid.

** Changed in: neutron
   Status: In Progress => Invalid

** This bug has been marked a duplicate of bug 1494959
   router_info new_ports loop execution time is O(n)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494958

Title:
  router_info _process_internal_ports execution time is O(n)

Status in neutron:
  Invalid

Bug description:
  router_info's _process_internal_ports execution time increases as the
  number of routers scheduled to a network node increases.  Ideally,
  this execution time should be O(1) if possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494958/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498565] Re: remove imported library DEBUG statements from neutron logs

2015-09-23 Thread Ryan Moats
An analysis of the logs from a gate run reveals the following
information:

Category          Q-agt.txt  Q-dhcp.txt  Q-l3.txt  Q-lbaas.txt  Q-lbaasv2.txt  Q-meta.txt  Q-metering.txt  Q-svc.txt  Total
Lines             1038683    241707      502911    3906         2223           44060       5157            879809     2718456
amqp              0          0           0         0            0              0           0               0          0
iso               0          0           0         0            0              0           0               0          0
keystone          0          0           0         0            0              0           0               21852      21852
oslo_concurrency  51642      20456       21986     46           4              12104       1204            53         107495
oslo_db           0          0           0         0            0              0           0               9          9
oslo_messaging    9763       6826        4671      50           85             1200        322             159        23076
oslo_service      156        1602        2011      2674         360            1485        1617            5235       15140
requests          0          0           0         0            0              0           0               0          0
routes            0          0           0         0            0              0           0               0          0
sqlalchemy        0          0           0         0            0              0           0               0          0
stevedore         0          0           0         0            0              0           0               0          0
urllib3           0          0           0         0            0              0           0               0          0
Fraction          0.059268   0.119500    0.057004  0.709165     0.201979       0.335656    0.609463        0.031039   0.061642

So, neutron generates about 2.7 M lines of logs, of which dependent
libraries produce 6%.  Marking this bug as Opinion for now.
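The per-library counts above can be reproduced with a simple scan of the log files. This is a sketch of the kind of counting involved; the LIBS list and the regex are assumptions about the "<level> <logger.name>" log layout shown in the examples, not the exact script used:

```python
import re
from collections import Counter

# Logger-name prefixes treated as "imported library" output.
LIBS = ("oslo_concurrency", "oslo_messaging", "oslo_service", "oslo_db",
        "keystone", "sqlalchemy", "amqp", "requests", "urllib3")
DEBUG_RE = re.compile(r"DEBUG (\S+)")

def library_debug_share(lines):
    """Count library DEBUG lines and their fraction of all lines."""
    hits = Counter()
    total = 0
    for line in lines:
        total += 1
        m = DEBUG_RE.search(line)
        if m and m.group(1).startswith(LIBS):
            # bucket by top-level package, e.g. oslo_messaging._drivers.*
            hits[m.group(1).split(".")[0]] += 1
    return hits, (sum(hits.values()) / total if total else 0.0)
```

Running this over each q-*.txt file and summing yields the per-file columns and the overall fraction in the table.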


** Changed in: neutron
   Status: In Progress => Opinion

** Changed in: neutron
 Assignee: Ryan Moats (rmoats) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498565

Title:
  remove imported library DEBUG statements from neutron logs

Status in neutron:
  Opinion

Bug description:
  Today, neutron log files include debug statements from imported
  libraries that are not particularly useful.

  Selected examples:

  q-agt.log:
  

  2015-09-21 05:58:50.765 DEBUG oslo_rootwrap.client [-] Popen for ['sudo', '/usr/local/bin/neutron-rootwrap-daemon', '/etc/neutron/rootwrap.conf'] command has been instantiated from (pid=17560) _initialize /usr/local/lib/python2.7/dist-packages/oslo_rootwrap/client.py:76

  2015-09-21 05:58:51.666 DEBUG oslo_messaging._drivers.amqpdriver [req-9ae26b0d-cfb4-4417-b54f-13970adc62ed None None] MSG_ID is 6430fa63de2f46afa0d19e30db724fa7 from (pid=17560) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:392

  2015-09-21 05:58:51.231 DEBUG oslo_messaging._drivers.amqp [-] Pool creating new connection from (pid=17560) create /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:103

  q-l3.log:

  2015-08-27 11:19:18.027 DEBUG oslo_concurrency.lockutils [req-c6a9f3dc-8254-4634-ae2b-0c5b916a70bc None None] Acquired semaphore "singleton_lock" from (pid=20186) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:198

  2015-08-27 11:19:18.111 DEBUG oslo_messaging._drivers.amqpdriver [-] MSG_ID is e1322001df9c4886b94c8270d053f100 from (pid=20186) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:392

  2015-08-27 11:19:18.170 DEBUG oslo_service.loopingcall [-] Fixed interval looping call 'neutron.agent.l3.agent.L3NATAgentWithStateReport._report_state' sleeping for 29.94 seconds from (pid=20186) _run_loop /usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py:121

  q-svc.log:

  2015-09-18 11:59:24.674 DEBUG keystoneclient.session [-] REQ: curl -g -i --cacert "/opt/stack/data/ca-bundle.pem" -X GET http://10.18.0.21:35357 -H "Accept: application/json" -H "User-Agent: neutron/7.0.0.0b4.dev217 keystonemiddleware.auth_token/2.2.0" from (pid=10920) _http_log_request /usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:198

  2015-09-18 11:59:25.782 DEBUG oslo_policy._cache_handler [req-c533f8b0-f260-4a2b-8e61-de2c1f801980 tempest-verify_tempest_config-176885932 062fbda3c2cd4966bfacf772bf0f0ed2] Reloading cached file /etc/neutron/policy.json from (pid=10920) read_cached_file /usr/local/lib/python2.7/dist-packages/oslo_policy/

[Yahoo-eng-team] [Bug 1246953] Re: neutron database upgrade from grizzly to havana does not work

2015-09-23 Thread Ryan Moats
Marking as invalid as it has been left incomplete for nearly two years.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1246953

Title:
  neutron database upgrade from grizzly to havana does not work

Status in neutron:
  Invalid

Bug description:
  When updating the quantum/neutron database from grizzly to havana, the
  routers table is missing columns needed by neutron. As a result,
  neutron router-list returns an error.

  2013-11-01 08:01:32.361 31533 TRACE neutron.api.v2.resource
  OperationalError: (OperationalError) (1054, "Unknown column
  'routers.enable_snat' in 'field list'") 'SELECT routers.tenant_id AS
  routers_tenant_id, routers.id AS routers_id, routers.name AS
  routers_name, routers.status AS routers_status, routers.admin_state_up
  AS routers_admin_state_up, routers.gw_port_id AS routers_gw_port_id,
  routers.enable_snat AS routers_enable_snat, routerroutes_1.destination
  AS routerroutes_1_destination, routerroutes_1.nexthop AS
  routerroutes_1_nexthop, routerroutes_1.router_id AS
  routerroutes_1_router_id \nFROM routers LEFT OUTER JOIN routerroutes
  AS routerroutes_1 ON routers.id = routerroutes_1.router_id' ()
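
  For context, the unknown column in the trace corresponds to a schema
  change along these lines (a hypothetical manual repair sketch, assuming
  MySQL; the supported path is running the havana alembic migrations via
  neutron-db-manage, not hand-editing the schema):

  ```sql
  -- Illustrative only: the shape of the missing havana-era column.
  -- The supported fix is the alembic migration chain.
  ALTER TABLE routers ADD COLUMN enable_snat BOOL NOT NULL DEFAULT 1;
  ```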

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1246953/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1498565] [NEW] remove imported library DEBUG statements from neutron logs

2015-09-22 Thread Ryan Moats
Public bug reported:

Today, neutron log files include debug statements from imported
libraries that are not particularly useful.

Selected examples:

q-agt.log:


2015-09-21 05:58:50.765 DEBUG oslo_rootwrap.client [-] Popen for ['sudo', '/usr/local/bin/neutron-rootwrap-daemon', '/etc/neutron/rootwrap.conf'] command has been instantiated from (pid=17560) _initialize /usr/local/lib/python2.7/dist-packages/oslo_rootwrap/client.py:76

2015-09-21 05:58:51.666 DEBUG oslo_messaging._drivers.amqpdriver [req-9ae26b0d-cfb4-4417-b54f-13970adc62ed None None] MSG_ID is 6430fa63de2f46afa0d19e30db724fa7 from (pid=17560) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:392

2015-09-21 05:58:51.231 DEBUG oslo_messaging._drivers.amqp [-] Pool creating new connection from (pid=17560) create /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:103

q-l3.log:

2015-08-27 11:19:18.027 DEBUG oslo_concurrency.lockutils [req-c6a9f3dc-8254-4634-ae2b-0c5b916a70bc None None] Acquired semaphore "singleton_lock" from (pid=20186) lock /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:198

2015-08-27 11:19:18.111 DEBUG oslo_messaging._drivers.amqpdriver [-] MSG_ID is e1322001df9c4886b94c8270d053f100 from (pid=20186) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:392

2015-08-27 11:19:18.170 DEBUG oslo_service.loopingcall [-] Fixed interval looping call 'neutron.agent.l3.agent.L3NATAgentWithStateReport._report_state' sleeping for 29.94 seconds from (pid=20186) _run_loop /usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py:121

q-svc.log:

2015-09-18 11:59:24.674 DEBUG keystoneclient.session [-] REQ: curl -g -i --cacert "/opt/stack/data/ca-bundle.pem" -X GET http://10.18.0.21:35357 -H "Accept: application/json" -H "User-Agent: neutron/7.0.0.0b4.dev217 keystonemiddleware.auth_token/2.2.0" from (pid=10920) _http_log_request /usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:198

2015-09-18 11:59:25.782 DEBUG oslo_policy._cache_handler [req-c533f8b0-f260-4a2b-8e61-de2c1f801980 tempest-verify_tempest_config-176885932 062fbda3c2cd4966bfacf772bf0f0ed2] Reloading cached file /etc/neutron/policy.json from (pid=10920) read_cached_file /usr/local/lib/python2.7/dist-packages/oslo_policy/_cache_handler.py:38

2015-09-18 12:19:02.534 DEBUG oslo_concurrency.lockutils [-] Lock "manager" acquired by "neutron.manager._create_instance" :: waited 0.000s from (pid=14653) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:253

2015-09-18 12:19:02.778 DEBUG oslo_db.sqlalchemy.engines [-] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION from (pid=14653) _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:256

2015-09-18 11:56:23.468 DEBUG oslo_messaging._drivers.amqp [req-a46b008e-3fcc-4741-b138-0bcc8f7a8be7 None None] Pool creating new connection from (pid=10891) create /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:103

2015-09-18 11:56:24.600 DEBUG oslo_service.service [-] Full set of CONF: from (pid=10891) wait /usr/local/lib/python2.7/dist-packages/oslo_service/service.py:505
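
Much of this noise can usually be cut without a code change via oslo.log's
default_log_levels option. A sketch of the knob (assuming a devstack-style
/etc/neutron/neutron.conf; the exact module list is illustrative, not the
fix this bug will take):

```ini
# /etc/neutron/neutron.conf -- illustrative sketch only:
# keep neutron itself at DEBUG while raising imported libraries to INFO/WARN
[DEFAULT]
debug = True
default_log_levels = amqp=WARN,oslo_rootwrap=INFO,oslo_messaging=INFO,oslo_concurrency=INFO,oslo_policy=INFO,oslo_service=INFO,oslo_db=INFO,keystoneclient=INFO,keystonemiddleware=INFO
```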

** Affects: neutron
 Importance: Medium
 Assignee: Ryan Moats (rmoats)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498565

Title:
  remove imported library DEBUG statements from neutron logs

Status in neutron:
  In Progress

Bug description:
  Today, neutron log files include debug statements from imported
  libraries that are not particularly useful.

  Selected examples:

  q-agt.log:
  

  2015-09-21 05:58:50.765 DEBUG oslo_rootwrap.client [-] Popen for
  ['sudo', '/usr/local/bin/neutron-rootwrap-daemon',
  '/etc/neutron/rootwrap.con

[Yahoo-eng-team] [Bug 1370868] Re: SDNVE plugin sets the tenant-name in the controller as the UUID instead of using the openstack project name

2015-09-22 Thread Ryan Moats
With the SDNVE plugin being removed from the repository, this bug is now
invalid and won't be fixed

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370868

Title:
  SDNVE plugin sets the tenant-name in the controller as the UUID
  instead of using the openstack project name

Status in neutron:
  Won't Fix

Bug description:
  During a neutron network-create operation, the IBM SDN-VE plugin
  implicitly also creates the tenant in the SDN-VE controller.

  It extracts the tenant details using the keystone client and issues a
  POST for the tenant creation.

  When this tenant gets created on the SDN-VE controller, the tenant-
  name is being set to UUID of the openstack-tenant instead of the
  actual project name.

  The name of the tenant on the controller should be the same as the
  openstack project/tenant name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370868/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1312065] Re: IBM SDN VE Plugin has legacy Quantum naming

2015-09-22 Thread Ryan Moats
with the removal of SDNVE plugin from the repo, this defect is now
invalid

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1312065

Title:
  IBM SDN VE Plugin has legacy Quantum naming

Status in neutron:
  Won't Fix

Bug description:
  In the sdnve neutron plugin, some constants are named starting with
  "n_", while some are still named starting with "q_". The naming should
  be kept consistent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1312065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1229482] Re: REST API responses are not being translated

2015-09-22 Thread Ryan Moats
Marking invalid as there have been no updates in almost two years.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1229482

Title:
  REST API responses are not being translated

Status in neutron:
  Invalid

Bug description:
  The following REST API response is not being translated:

  GET /v2.0/networks/22c7b033-8a46-4c0d-88e2-f00582212a79

  When the Accept-Language header is some language other than English,
  that is installed in the system, the response is still in English.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1229482/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1084355] Re: Tap interface does not automatically get an IP address upon a hypervisor reboot

2015-09-22 Thread Ryan Moats
marking invalid as it was incomplete and hasn't been updated in over a
year

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1084355

Title:
  Tap interface does not automatically get an IP address upon a
  hypervisor reboot

Status in neutron:
  Invalid

Bug description:
  A very simple configuration, in which there is a subnet and a net. No
  floating.

  The tap interface gets an IP address when a subnet is created. For
  instance for 172.16.10.0/24 subnet, tap interface gets an IP address
  of 172.16.10.2

  However, upon a reboot, the tap interface does not always show up and
  does not automatically get an IP address. As a result, IP assignment
  to a new instance fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1084355/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497740] [NEW] nova API proxy to neutron should avoid race-ful behavior

2015-09-20 Thread Ryan Moats
Public bug reported:

Version of Nova/OpenStack: liberty

Calls to associate a floating IP with an instance currently return a
202 status.  When proxying these calls to neutron, just returning 202
without having validated the status of the floating IP first leads to
raceful failures in tempest scenario tests (for example, see the
test_volume_boot_pattern failure in
http://logs.openstack.org/20/225420/1/check/gate-tempest-dsvm-neutron-dvr/a78a052/logs/testr_results.html.gz)

Is it possible to (when proxying to neutron) first verify the status of
the neutron floating IP before returning the 202 status?
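
One way to close the race is to poll the floating IP's status before the
202 goes back to the caller. A minimal sketch (the helper name and the
injected get_status callable are hypothetical, not actual nova or neutron
code):

```python
import time

def wait_for_fip_active(get_status, timeout=10.0, interval=0.5,
                        clock=time.monotonic, sleep=time.sleep):
    """Poll ``get_status`` until it reports ACTIVE or ``timeout`` expires.

    ``get_status`` would wrap something like a neutron floating IP
    lookup; the clock and sleep hooks are injectable so the loop is
    testable.  Returns the last status observed.
    """
    deadline = clock() + timeout
    status = get_status()
    while status != 'ACTIVE' and clock() < deadline:
        sleep(interval)
        status = get_status()
    return status
```

The proxy layer could then return 202 only once this reports ACTIVE, and
surface a timeout as an error (or at least a log line) instead of silently
succeeding.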

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497740

Title:
  nova API proxy to neutron should avoid race-ful behavior

Status in OpenStack Compute (nova):
  New

Bug description:
  Version of Nova/OpenStack: liberty

  Calls to associate a floating IP with an instance currently return a
  202 status.  When proxying these calls to neutron, just returning 202
  without having validated the status of the floating IP first leads to
  raceful failures in tempest scenario tests (for example, see the
  test_volume_boot_pattern failure in
  http://logs.openstack.org/20/225420/1/check/gate-tempest-dsvm-neutron-dvr/a78a052/logs/testr_results.html.gz)

  Is it possible to (when proxying to neutron) first verify the status
  of the neutron floating IP before returning the 202 status?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497740/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497450] [NEW] DNS lookup code in get_ports needs to be optimized

2015-09-18 Thread Ryan Moats
Public bug reported:

Kilo's get_ports code can be found at http://pastebin.com/PjVG2KFt while
Liberty's get_ports code can be found at http://pastebin.com/wpmTx8H7

The difference in the two code paths (the DNS code) leads to an
execution time difference shown in http://ibin.co/2G72PkX2eshD

** Affects: neutron
 Importance: High
 Status: New


** Tags: performance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497450

Title:
  DNS lookup code in get_ports needs to be optimized

Status in neutron:
  New

Bug description:
  Kilo's get_ports code can be found at http://pastebin.com/PjVG2KFt
  while Liberty's get_ports code can be found at
  http://pastebin.com/wpmTx8H7

  The difference in the two code paths (the DNS code) leads to an
  execution time difference shown in http://ibin.co/2G72PkX2eshD

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494963] Re: router_info process_floating_ip_addresses execution time is O(n)

2015-09-14 Thread Ryan Moats
Using the 3.19 kernel appears to address this, so marking invalid.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494963

Title:
  router_info process_floating_ip_addresses execution time is O(n)

Status in neutron:
  Invalid

Bug description:
  router_info's process_floating_ip_addresses execution time increases
  as the number of routers scheduled to a network node increases.
  Ideally, this execution time should be O(1) if possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494963/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379089] Re: get_sync_data is too convoluted

2015-09-11 Thread Ryan Moats
Marking this bug as invalid for Liberty as the re-factored solution
turns out to be as complex to maintain as the original code, which means
it isn't meeting the needs of the defect.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1379089

Title:
  get_sync_data is too convoluted

Status in neutron:
  Invalid

Bug description:
  l3_db's get_sync_data is too convoluted, so l3_dvr_db fully implements
  its own version.
  The method needs to be sorted out so it can be easily extended by
  subclasses like l3_dvr_db.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1379089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494959] [NEW] router_info new_ports loop execution time is O(n)

2015-09-11 Thread Ryan Moats
Public bug reported:

router_info's looping through new_ports takes an increasing amount of
time as the number of routers scheduled to a network node increases.
Ideally, this execution time should be O(1) if possible.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494959

Title:
  router_info new_ports loop execution time is O(n)

Status in neutron:
  New

Bug description:
  router_info's looping through new_ports takes an increasing amount of
  time as the number of routers scheduled to a network node increases.
  Ideally, this execution time should be O(1) if possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494959/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494958] [NEW] router_info _process_internal_ports execution time is O(n)

2015-09-11 Thread Ryan Moats
Public bug reported:

router_info's _process_internal_ports execution time increases as the
number of routers scheduled to a network node increases.  Ideally, this
execution time should be O(1) if possible.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494958

Title:
  router_info _process_internal_ports execution time is O(n)

Status in neutron:
  New

Bug description:
  router_info's _process_internal_ports execution time increases as the
  number of routers scheduled to a network node increases.  Ideally,
  this execution time should be O(1) if possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494958/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494961] [NEW] router_info's _get_existing_devices execution time is O(n)

2015-09-11 Thread Ryan Moats
Public bug reported:

router_info's _get_existing_devices execution time increases as the
number of routers scheduled to a network node increases. Ideally, this
execution time should be O(1) if possible.
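
The usual way to pull loops like this down from O(n) scans is to build the
lookup structure once and test membership against a set. An illustrative
sketch (the function and field names are hypothetical, not the actual
router_info code; the qr- prefix mirrors neutron's internal device naming):

```python
def find_stale_devices(existing_devices, current_ports):
    """Return qr- devices present on the node with no backing port.

    Building the set of expected device names once makes each
    membership test O(1), instead of re-scanning the port list for
    every device found on the node.
    """
    expected = {'qr-' + port['id'][:11] for port in current_ports}
    return [dev for dev in existing_devices
            if dev.startswith('qr-') and dev not in expected]
```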

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494961

Title:
  router_info's _get_existing_devices execution time is O(n)

Status in neutron:
  New

Bug description:
  router_info's _get_existing_devices execution time increases as the
  number of routers scheduled to a network node increases. Ideally, this
  execution time should be O(1) if possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1494963] [NEW] router_info process_floating_ip_addresses execution time is O(n)

2015-09-11 Thread Ryan Moats
Public bug reported:

router_info's process_floating_ip_addresses execution time increases as
the number of routers scheduled to a network node increases. Ideally,
this execution time should be O(1) if possible.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494963

Title:
  router_info process_floating_ip_addresses execution time is O(n)

Status in neutron:
  New

Bug description:
  router_info's process_floating_ip_addresses execution time increases
  as the number of routers scheduled to a network node increases.
  Ideally, this execution time should be O(1) if possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1494963/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493945] [NEW] Router scheduling at network node fails under scale

2015-09-09 Thread Ryan Moats
Public bug reported:

After around 100 routers have been scheduled to a network node,
subsequent schedulings fail with the following extracted signature:

38343:2015-09-09 06:53:15.305 DEBUG neutron.agent.l3.agent [req-d7ce10e2-b689-4c5b-b4c7-30aa4f1fdbbb admin cdd316b857a947488ca9120aef5f6891] Got routers updated notification :[u'54ffc2c4-123b-460b-bd2f-01ae5277e3e1'] from (pid=19102) routers_updated /opt/stack/neutron/neutron/agent/l3/agent.py:385
38448:2015-09-09 06:53:16.328 DEBUG neutron.agent.l3.agent [req-63d36e16-5d5d-4575-825b-28722ec28a1e admin cdd316b857a947488ca9120aef5f6891] Got routers updated notification :[u'54ffc2c4-123b-460b-bd2f-01ae5277e3e1'] from (pid=19102) routers_updated /opt/stack/neutron/neutron/agent/l3/agent.py:385
41013:2015-09-09 06:54:23.815 DEBUG neutron.agent.l3.agent [-] Starting router update for 54ffc2c4-123b-460b-bd2f-01ae5277e3e1, action None, priority 0 from (pid=19102) _process_router_update /opt/stack/neutron/neutron/agent/l3/agent.py:456
42690:2015-09-09 06:55:23.818 ERROR neutron.agent.l3.agent [-] Failed to fetch router information for '54ffc2c4-123b-460b-bd2f-01ae5277e3e1'
42710:2015-09-09 06:55:23.821 DEBUG neutron.agent.l3.agent [-] Starting router update for 54ffc2c4-123b-460b-bd2f-01ae5277e3e1, action None, priority 0 from (pid=19102) _process_router_update /opt/stack/neutron/neutron/agent/l3/agent.py:456
42738:2015-09-09 06:55:30.615 DEBUG oslo_messaging._drivers.amqpdriver [-] queues: 8, message: {u'_unique_id': u'c3f0a880f9544bf8b938bb6ced4fee6f', u'failure': None, u'result': [{u'status': u'ACTIVE', u'_interfaces': [{u'allowed_address_pairs': [], u'extra_dhcp_opts': [], u'device_owner': u'network:router_interface', u'port_security_enabled': False, u'binding:profile': {}, u'fixed_ips': [{u'subnet_id': u'3d01720b-324d-4f69-8767-43705217aeb0', u'prefixlen': 24, u'ip_address': u'192.168.18.1'}], u'id': u'7ef5df56-e82b-4fb8-8b1c-836ec93338d3', u'security_groups': [], u'binding:vif_details': {}, u'binding:vif_type': u'unbound', u'mac_address': u'fa:16:3e:ee:9f:33', u'status': u'DOWN', u'subnets': [{u'ipv6_ra_mode': None, u'cidr': u'192.168.18.0/24', u'gateway_ip': u'192.168.18.1', u'id': u'3d01720b-324d-4f69-8767-43705217aeb0', u'subnetpool_id': None}], u'binding:host_id': u'legacy-network-1', u'device_id': u'54ffc2c4-123b-460b-bd2f-01ae5277e3e1', u'name': u'', u'admin_state_up': True, u'network_id': u'7a77e6c2-6e25-4223-9981-987f33e75d18', u'dns_name': u'', u'binding:vnic_type': u'normal', u'tenant_id': u'4cd3d0ecfa6f48bb946932481ef04b4e', u'extra_subnets': []}], u'enable_snat': True, u'ha_vr_id': 0, u'gw_port_host': None, u'gw_port_id': u'2a3dabbc-db24-40c5-880a-3ef738537520', u'admin_state_up': True, u'tenant_id': u'4cd3d0ecfa6f48bb946932481ef04b4e', u'gw_port': {u'allowed_address_pairs': [], u'extra_dhcp_opts': [], u'device_owner': u'network:router_gateway', u'port_security_enabled': False, u'binding:profile': {}, u'fixed_ips': [{u'subnet_id': u'd43deb2a-6bcd-40b2-b559-36a798e932ba', u'prefixlen': 20, u'ip_address': u'172.18.128.101'}], u'id': u'2a3dabbc-db24-40c5-880a-3ef738537520', u'security_groups': [], u'binding:vif_details': {}, u'binding:vif_type': u'unbound', u'mac_address': u'fa:16:3e:b5:fa:de', u'status': u'DOWN', u'subnets': [{u'ipv6_ra_mode': None, u'cidr': u'172.18.128.0/20', u'gateway_ip': u'172.18.128.1', u'id': u'd43deb2a-6bcd-40b2-b559-36a798e932ba', u'subnetpool_id': None}], u'binding:host_id': u'legacy-network-1', u'device_id': u'54ffc2c4-123b-460b-bd2f-01ae5277e3e1', u'name': u'', u'admin_state_up': True, u'network_id': u'c546009b-207c-44cd-8a4b-3e1e426eb56b', u'dns_name': u'', u'binding:vnic_type': u'normal', u'tenant_id': u'', u'extra_subnets': []}, u'distributed': False, u'_snat_router_interfaces': [], u'routes': [], u'external_gateway_info': {u'network_id': u'c546009b-207c-44cd-8a4b-3e1e426eb56b', u'enable_snat': True, u'external_fixed_ips': [{u'subnet_id': u'd43deb2a-6bcd-40b2-b559-36a798e932ba', u'ip_address': u'172.18.128.101'}]}, u'ha': False, u'id': u'54ffc2c4-123b-460b-bd2f-01ae5277e3e1', u'name': u'router-100'}]} from (pid=19102) put /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:230
42921:2015-09-09 06:56:23.824 ERROR neutron.agent.l3.agent [-] Failed to fetch router information for '54ffc2c4-123b-460b-bd2f-01ae5277e3e1'

The failure above comes from oslo_messaging timing out while getting
router information at line 465 in _process_router_update.  However, the
status of the now-unscheduled router is still shown as ACTIVE by the
neutron server, so no one will know about the failure.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493945

Title:
  Router scheduling at network node fails under scale

Status in neutron:
  New

Bug description:
  After around 

[Yahoo-eng-team] [Bug 1489592] [NEW] Restarting OVS agent on network node breaks networking

2015-08-27 Thread Ryan Moats
Public bug reported:

Set up a three-node devstack (compute+network+controller) running DVR on
OVS+VXLAN and configure:

external net --- router --- internal net --- instance

ping continuously from the instance to 8.8.8.8

restart the OVS agent on the network node
after the restart, the ping to 8.8.8.8 halts and the instance cannot
ping the DHCP server address of the network

at this point, restarting the dhcp and/or L3 agent does not help
re-establish communication; pretty much the only way out is to restack
the network node.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1489592

Title:
  Restarting OVS agent on network node breaks networking

Status in neutron:
  New

Bug description:
  Set up a three-node devstack (compute+network+controller) running DVR
  on OVS+VXLAN and configure:

  external net --- router --- internal net --- instance

  ping continuously from the instance to 8.8.8.8

  restart the OVS agent on the network node
  after the restart, the ping to 8.8.8.8 halts and the instance cannot
  ping the DHCP server address of the network

  at this point, restarting the dhcp and/or L3 agent does not help
  re-establish communication; pretty much the only way out is to restack
  the network node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1489592/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488513] [NEW] The pipeline job gate-neutron-dsvm-api is showing increased timeouts in the FWaaS test

2015-08-25 Thread Ryan Moats
Public bug reported:

The pipeline job gate-neutron-dsvm-api is showing increased timeouts in
the _try_delete_firewall piece of the
test_firewall_insertion_mode_one_firewall_per_router test from
neutron.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

** Tags added: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488513

Title:
  The pipeline job gate-neutron-dsvm-api is showing increased timeouts
  in the FWaaS test

Status in neutron:
  New

Bug description:
  The pipeline job gate-neutron-dsvm-api is showing increased timeouts
  in the _try_delete_firewall piece of the
  test_firewall_insertion_mode_one_firewall_per_router test from
  neutron.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488513] Re: The pipeline job gate-neutron-dsvm-api is showing increased timeouts in the FWaaS test

2015-08-25 Thread Ryan Moats
Marking as invalid unless/until I see it more than once in the past 48
hours

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488513

Title:
  The pipeline job gate-neutron-dsvm-api is showing increased timeouts
  in the FWaaS test

Status in neutron:
  Invalid

Bug description:
  The pipeline job gate-neutron-dsvm-api is showing increased timeouts
  in the _try_delete_firewall piece of the
  test_firewall_insertion_mode_one_firewall_per_router test from
  neutron.tests.api.test_fwaas_extensions.FWaaSExtensionTestJSON.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1046121] Re: dhcp should never be enabled for a router external net

2015-04-08 Thread Ryan Moats
Waking this bug up because, while the solution was to document, there
should be a pointer to the document in question so that the issue is not
brought up in the future.

** Changed in: neutron
   Status: Expired => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1046121

Title:
  dhcp should never be enabled for a router external net

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  it doesn't make sense in the existing model, as the router IPs and the
  floating IPs allocated from an external net never make DHCP requests.

  I don't believe there is any significant additional harm caused by
  this though, other than unneeded CPU churn from DHCP agent and
  dnsmasq, and a burned IP address allocated for a DHCP port.

  One tricky issue is that DHCP is enabled by default, so the question
  is whether we should fail if the user does not explicitly disable it
  when creating a network, or if we should just automatically set it to
  False.  Unfortunately, I don't think we can tell the difference
  between this value defaulting to true and it being explicitly set to
  true, so it seems that if we want to prevent it from being set to true
  in the API, we should require it to be set to False.  We also need to
  prevent it from being set to True on an update.

  Another option would be to ignore the value set in the API and just
  have the DHCP agent ignore networks for which router:external=True.
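
  The first option (refusing the request) can be sketched as a plain
  validation check (a hypothetical helper, not the actual plugin code;
  note it treats a missing enable_dhcp as the default True, which is
  exactly the ambiguity the report describes):

  ```python
  def validate_external_net_dhcp(network, subnet):
      """Reject a subnet with DHCP enabled on a router:external network.

      Because the API cannot distinguish enable_dhcp defaulting to True
      from an explicit True, external networks must set it to False.
      """
      if network.get('router:external') and subnet.get('enable_dhcp', True):
          raise ValueError("enable_dhcp must be explicitly set to False "
                           "for subnets on a router:external network")
  ```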

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1046121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp