[Yahoo-eng-team] [Bug 1714769] Re: neutron tempest API test 'test_detail_quotas' fails with "internal server error"

2017-09-03 Thread Numan Siddique
The issue is not seen with the neutron ml2ovs plugin because floatingip
is a tracked resource there.

I think this is a problem for any countable resource: if the
loadbalancer plugin were loaded, we would see the same error.
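The failure mode can be sketched in isolation. The names below are illustrative, not neutron's actual quota-driver code; the point is that counting a countable resource assumes the core plugin implements a get_<resource>s method, and the OVN case hits a plugin without get_floatingips:

```python
# Illustrative sketch only -- not neutron's actual quota driver code.
# A countable quota resource is counted by calling back into the core
# plugin via a conventionally named method, get_<resource>s.

class Ml2LikePlugin:
    """A core plugin that handles networks but not floating IPs."""
    def get_networks(self, context, filters=None):
        return [{"id": "net-1"}, {"id": "net-2"}]

def count_resource(plugin, context, resource, tenant_id):
    getter = getattr(plugin, "get_%ss" % resource, None)
    if getter is None:
        # Without this guard the lookup raises AttributeError, which
        # surfaces as a 500 "internal server error" to the API client.
        raise NotImplementedError(
            "plugin cannot count resource %r" % resource)
    return len(getter(context, filters={"tenant_id": [tenant_id]}))

plugin = Ml2LikePlugin()
print(count_resource(plugin, None, "network", "t1"))      # prints 2
try:
    count_resource(plugin, None, "floatingip", "t1")
except NotImplementedError as exc:
    print("graceful failure:", exc)
```

With such a guard an unsupported countable resource would fail with a clear error instead of an unhandled AttributeError; neutron's real fix would live in the quota machinery, not in a helper like this.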


** Changed in: neutron
 Assignee: Numan Siddique (numansiddique) => (unassigned)

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: networking-ovn
 Assignee: (unassigned) => Numan Siddique (numansiddique)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1714769

Title:
  neutron tempest API test 'test_detail_quotas' fails with  "internal
  server error"

Status in networking-ovn:
  In Progress
Status in neutron:
  New

Bug description:
  The neutron tempest API test -
  neutron.tests.tempest.api.admin.test_quotas.QuotasTest.test_detail_quotas
  calls the API "GET /v2.0/quotas/{tenant_id}/details", which fails with
  the below logs in the neutron server:

  INFO neutron.pecan_wsgi.hooks.translation [None 
req-64308681-f568-4dea-961b-5c9de579ac7e admin admin] GET failed (client 
error): The resource could not be found.
  INFO neutron.wsgi [None req-64308681-f568-4dea-961b-5c9de579ac7e admin admin] 
10.0.0.7 "GET /v2.0/quotas/ff5c5121117348df94aa181d3504375b/detail HTTP/1.1" 
status: 404  len: 309 time: 0.0295429
  ERROR neutron.api.v2.resource [None req-b1b677cd-73b1-435d-bcc4-845dfa713046 
admin admin] details failed: No details.: AttributeError: 'Ml2Plugin' object 
has no attribute 'get_floatingips'
  ERROR neutron.api.v2.resource Traceback (most recent call last):
  ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 98, in resource
  ERROR neutron.api.v2.resource result = method(request=request, **args)
  ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/extensions/quotasv2_detail.py", line 56, in details
  ERROR neutron.api.v2.resource self._get_detailed_quotas(request, id)}
  ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/extensions/quotasv2_detail.py", line 46, in 
_get_detailed_quotas
  ERROR neutron.api.v2.resource resource_registry.get_all_resources(), 
tenant_id)
  ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", 
line 163, in wrapped
  ERROR neutron.api.v2.resource return method(*args, **kwargs)
  ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", 
line 93, in wrapped
  ERROR neutron.api.v2.resource setattr(e, '_RETRY_EXCEEDED', True)
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  ERROR neutron.api.v2.resource self.force_reraise()
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  ERROR neutron.api.v2.resource six.reraise(self.type_, self.value, self.tb)
  ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", 
line 89, in wrapped
  ERROR neutron.api.v2.resource return f(*args, **kwargs)
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 150, in wrapper
  ERROR neutron.api.v2.resource ectxt.value = e.inner_exc
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  ERROR neutron.api.v2.resource self.force_reraise()
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  ERROR neutron.api.v2.resource six.reraise(self.type_, self.value, self.tb)
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 138, in wrapper
  ERROR neutron.api.v2.resource return f(*args, **kwargs)
  ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", 
line 128, in wrapped
  ERROR neutron.api.v2.resource LOG.debug("Retry wrapper got retriable 
exception: %s", e)
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  ERROR neutron.api.v2.resource self.force_reraise()
  ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  ERROR neutron.api.v2.resource six.reraise(self.type_, self.value, self.tb)
  ERROR neutron.api.v2.resource   File "/opt/stack/neutron/neutron/db/api.py", 
line 124, in wrapped
  ERROR neutron.api.v2.resource return f(*dup_args, **dup_kwargs)
  ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/quota/driver.py", line 98, in 
get_detailed_tenant_quotas
  ER

[Yahoo-eng-team] [Bug 1644788] [NEW] The newly added unit test case - test_create_subnet_check_mtu_in_mech_context in ml2/test_plugin.py has broken networking-ovn

2016-11-25 Thread Numan Siddique
Public bug reported:

This test case, added in the patch with change id
I3214a19e2374221b211ac7ab9b98842a1bdfc4a7, creates a vxlan network. The OVN ML2
driver doesn't support the vxlan type driver, so the test case should be
modified accordingly.
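One way to modify it, sketched below; the set of loaded type drivers and the test body are illustrative (neutron's test would ask the ML2 type manager), only the test name comes from this report. The test checks which type drivers are loaded and skips when the required one is absent:

```python
import unittest

# Illustrative stand-in for the ML2 type manager's loaded drivers; an
# OVN-based deployment would not include "vxlan" here.
SUPPORTED_TYPE_DRIVERS = {"geneve", "flat", "vlan"}

class MtuMechContextTest(unittest.TestCase):
    def test_create_subnet_check_mtu_in_mech_context(self):
        network_type = "vxlan"
        if network_type not in SUPPORTED_TYPE_DRIVERS:
            self.skipTest("%s type driver is not loaded" % network_type)
        # ... create the vxlan network and assert on the MTU seen in the
        # mechanism driver context ...

result = unittest.TestResult()
MtuMechContextTest("test_create_subnet_check_mtu_in_mech_context").run(result)
print("skipped:", len(result.skipped))  # prints: skipped: 1
```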

** Affects: neutron
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1644788

Title:
  The newly added unit test case -
  test_create_subnet_check_mtu_in_mech_context in ml2/test_plugin.py
  has broken networking-ovn

Status in neutron:
  New

Bug description:
  This test case, added in the patch with change id
I3214a19e2374221b211ac7ab9b98842a1bdfc4a7, creates a vxlan network. The OVN ML2
driver doesn't support the vxlan type driver, so the test case should be
modified accordingly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1644788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1644519] [NEW] Mechanism driver's create_subnet_(pre/post)commit functions when called by ML2 plugin don't have the correct MTU value in context.network.current

2016-11-24 Thread Numan Siddique
Public bug reported:

The mtu value present in context.network.current doesn't take the tunnel
encapsulation overhead into account.
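For reference, the value the mechanism driver should see is the underlying MTU minus the encapsulation overhead of the tunnel type. The overhead figures below are the commonly cited IPv4 numbers, not constants taken from neutron's source:

```python
# Commonly cited IPv4 encapsulation overheads in bytes (illustrative;
# neutron's type drivers keep their own constants, and IPv6 outer
# headers add 20 more).
ENCAP_OVERHEAD = {
    "flat": 0,
    "vlan": 0,
    "gre": 42,     # outer Ethernet + IP + GRE
    "vxlan": 50,   # outer Ethernet + IP + UDP + VXLAN
    "geneve": 50,  # base Geneve header; option TLVs add more
}

def effective_mtu(physical_mtu, network_type):
    """MTU a tenant network should advertise after tunnel encapsulation."""
    return physical_mtu - ENCAP_OVERHEAD[network_type]

print(effective_mtu(1500, "vxlan"))  # prints 1450
```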

** Affects: neutron
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1644519

Title:
  Mechanism driver's create_subnet_(pre/post)commit functions, when
  called by the ML2 plugin, don't have the correct MTU value in
  context.network.current

Status in neutron:
  New

Bug description:
  the mtu value present in context.network.current doesn't take the
  tunnel encapsulation value into account.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1644519/+subscriptions



[Yahoo-eng-team] [Bug 1550278] Re: tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.* tempest tests are failing repeatedly in the gate for networking-ovn

2016-02-26 Thread Numan Siddique
** Summary changed:

- tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.* tempest tests are 
failing repeatedly in the gate
+ tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.* tempest tests are 
failing repeatedly in the gate for networking-ovn

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

** No longer affects: neutron

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1550278

Title:
  tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.* tempest tests
  are failing repeatedly in the gate for networking-ovn

Status in networking-ovn:
  In Progress
Status in neutron:
  In Progress

Bug description:
  We are seeing a lot of tempest failures for the tests 
tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.* 
  with the below error.

  Either we should fix the error or at least disable these tests
  temporarily.

  
  t156.9: 
tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_stateless_no_ra[id-ae2f4a5d-03ff-4c42-a3b0-ce2fcb7ea832]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2016-02-26 07:29:46,168 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:test_dhcpv6_stateless_no_ra): 404 POST 
http://127.0.0.1:9696/v2.0/subnets 0.370s
  2016-02-26 07:29:46,169 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: {"subnet": {"cidr": "2003::/64", "ip_version": 6, "network_id": 
"4c7de56a-b059-4239-a5a0-94a53ba4929c", "gateway_ip": "2003::1", 
"ipv6_address_mode": "slaac"}}
  Response - Headers: {'content-length': '132', 'status': '404', 'date': 
'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-e21f771f-1a16-452a-9429-8a01f0409ae3'}
  Body: {"NeutronError": {"message": "Port 
598c23eb-1ae4-4010-a263-39f86240fd86 could not be found.", "type": 
"PortNotFound", "detail": ""}}
  2016-02-26 07:29:46,196 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:tearDown): 200 GET http://127.0.0.1:9696/v2.0/ports 
0.024s
  2016-02-26 07:29:46,197 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'content-location': 
'http://127.0.0.1:9696/v2.0/ports', 'content-length': '13', 'status': '200', 
'date': 'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-f0966c23-c72f-4a6f-b113-5d88a6dd5912'}
  Body: {"ports": []}
  2016-02-26 07:29:46,250 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:tearDown): 200 GET 
http://127.0.0.1:9696/v2.0/subnets 0.052s
  2016-02-26 07:29:46,251 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'content-location': 
'http://127.0.0.1:9696/v2.0/subnets', 'content-length': '457', 'status': '200', 
'date': 'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-3b29ba53-9ae0-4c0f-8c18-ec12db7a6bde'}
  Body: {"subnets": [{"name": "", "enable_dhcp": true, "network_id": 
"4c7de56a-b059-4239-a5a0-94a53ba4929c", "tenant_id": 
"631f9cb1391d41b6aba109afe06bc51b", "dns_nameservers": [], "gateway_ip": 
"2003::1", "ipv6_ra_mode": null, "allocation_pools": [{"start": "2003::2", 
"end": "2003::ffff:ffff:ffff:ffff"}], "host_routes": [], "ip_version": 6, 
"ipv6_address_mode": "slaac", "cidr": "2003::/64", "id": 
"6bc2602c-2584-44cc-a6cd-b8af444f6403", "subnetpool_id": null}]}
  2016-02-26 07:29:46,293 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:tearDown): 200 GET 
http://127.0.0.1:9696/v2.0/routers 0.041s
  2016-02-26 07:29:46,293 4673 DEBUG[tempest.lib.common.rest_

[Yahoo-eng-team] [Bug 1475176] [NEW] SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools is not cleaning up the subnet pool at the end

2015-07-16 Thread Numan Siddique
Public bug reported:

API test -
tests.api.test_subnetpools.SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools
is not cleaning up the subnet pool at the end

** Affects: neutron
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475176

Title:
  SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools is
  not cleaning up the subnet pool at the end

Status in neutron:
  In Progress

Bug description:
  API test -
  
tests.api.test_subnetpools.SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools
  is not cleaning up the subnet pool at the end

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475176/+subscriptions



[Yahoo-eng-team] [Bug 1475175] [NEW] SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools is not cleaning up the subnet pool at the end

2015-07-16 Thread Numan Siddique
*** This bug is a duplicate of bug 1475176 ***
https://bugs.launchpad.net/bugs/1475176

Public bug reported:

API test -
tests.api.test_subnetpools.SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools
is not cleaning up the subnet pool at the end

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1475175

Title:
  SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools is
  not cleaning up the subnet pool at the end

Status in neutron:
  New

Bug description:
  API test -
  
tests.api.test_subnetpools.SubnetPoolsTestV6.test_create_dual_stack_subnets_from_subnetpools
  is not cleaning up the subnet pool at the end

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1475175/+subscriptions



[Yahoo-eng-team] [Bug 1471787] Re: policies defined for the neutron extension resource attributes are not enforced

2015-07-09 Thread Numan Siddique
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1471787

Title:
  policies defined for the neutron extension resource attributes are not
  enforced

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Policies defined for the neutron extension resource attributes are not
  enforced.

  In the case of address scopes, even though policy.json has the below
  rules, neutron is allowing a tenant user to create a shared address
  scope.

  "create_subnetpool": "",
  "create_subnetpool:shared": "rule:admin_only"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1471787/+subscriptions



[Yahoo-eng-team] [Bug 1471787] [NEW] policies defined for the neutron extension resource attributes are not enforced

2015-07-06 Thread Numan Siddique
Public bug reported:

Policies defined for the neutron extension resource attributes are not
enforced.

In the case of address scopes, even though policy.json has the below
rules, neutron is allowing a tenant user to create a shared address
scope.

"create_subnetpool": "",
"create_subnetpool:shared": "rule:admin_only"
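What the missing enforcement amounts to can be shown with a minimal stand-in for the policy engine (the structure is illustrative, not oslo.policy's API): the base create rule may pass, but per-attribute rules such as create_subnetpool:shared must also be checked for every attribute the caller actually supplied:

```python
# Minimal illustration of attribute-level policy enforcement, loosely
# modelled on neutron's policy.json semantics. Not oslo.policy itself.

POLICIES = {
    "create_subnetpool": lambda ctx: True,                    # "" -> anyone
    "create_subnetpool:shared": lambda ctx: ctx["is_admin"],  # admin_only
}

class PolicyNotAuthorized(Exception):
    pass

def enforce_create(ctx, resource, body):
    base = "create_%s" % resource
    if not POLICIES[base](ctx):
        raise PolicyNotAuthorized(base)
    # The reported bug: per-attribute rules like create_subnetpool:shared
    # were not being applied to the attributes the caller supplied.
    for attr in body:
        rule = POLICIES.get("%s:%s" % (base, attr))
        if rule is not None and not rule(ctx):
            raise PolicyNotAuthorized("%s:%s" % (base, attr))

tenant = {"is_admin": False}
enforce_create(tenant, "subnetpool", {"name": "p1"})   # allowed
try:
    enforce_create(tenant, "subnetpool", {"shared": True})
    print("BUG: tenant created a shared subnetpool")
except PolicyNotAuthorized:
    print("correctly denied")
```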

** Affects: neutron
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1471787

Title:
  policies defined for the neutron extension resource attributes are not
  enforced

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Policies defined for the neutron extension resource attributes are not
  enforced.

  In the case of address scopes, even though the policy.json has  the
  below rules, neutron is allowing the tenant user to create a shared
  address scope.

  create_subnetpool: ,
  create_subnetpool:shared: rule:admin_only

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1471787/+subscriptions



[Yahoo-eng-team] [Bug 1462154] Re: With DVR Pings to floating IPs replied with fixed-ips

2015-06-25 Thread Numan Siddique
** Changed in: neutron
   Status: Opinion => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1462154

Title:
  With DVR Pings to floating IPs replied with fixed-ips

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  On my single node devstack setup, there are 2 VMs hosted.  VM1 has no 
floating IP assigned.  VM2 has a floating IP assigned.  From VM1, ping VM2 
using the floating IP.  The ping output reports that the replies come from 
VM2's fixed IP address.
  The reply should come from VM2's floating IP address.

  This is a DVR problem as it doesn't happen when the L3 agent's mode is
  'legacy'.

  This may be a problem with the NAT rules defined by the DVR L3-agent.

  I used the latest neutron code on the master branch to reproduce, The
  agent_mode is set to 'dvr_snat'.

  
  Here is how the problem is reproduced:

  VM1 and VM2 run on the same host.

  VM1 has fixed IP of 10.11.12.4, no floating-ip associated.
  VM2 has fixed IP of 10.11.12.5  floating-ip=10.127.10.226

  Logged into VM1 from the qrouter namespace.

  From VM1, ping 10.127.10.226; the ping output at VM1 reports that the
  replies come from VM2's fixed IP address:

  # ssh cirros@10.11.12.4
  cirros@10.11.12.4's password: 
  $ ping 10.127.10.226
  PING 10.127.10.226 (10.127.10.226): 56 data bytes
  64 bytes from 10.11.12.5: seq=0 ttl=64 time=4.189 ms
  64 bytes from 10.11.12.5: seq=1 ttl=64 time=1.254 ms
  64 bytes from 10.11.12.5: seq=2 ttl=64 time=2.386 ms
  64 bytes from 10.11.12.5: seq=3 ttl=64 time=2.064 ms
  ^C
  --- 10.127.10.226 ping statistics ---
  4 packets transmitted, 4 packets received, 0% packet loss
  round-trip min/avg/max = 1.254/2.473/4.189 ms
  $ 

  
  If I associate a floating IP on VM1 then repeat the same test, ping reports 
the replies comes from VM2's floating IP:

  # ssh cirros@10.11.12.4
  cirros@10.11.12.4's password: 
  $ ping 10.127.10.226
  PING 10.127.10.226 (10.127.10.226): 56 data bytes
  64 bytes from 10.127.10.226: seq=0 ttl=63 time=16.750 ms
  64 bytes from 10.127.10.226: seq=1 ttl=63 time=2.417 ms
  64 bytes from 10.127.10.226: seq=2 ttl=63 time=1.558 ms
  64 bytes from 10.127.10.226: seq=3 ttl=63 time=1.042 ms
  64 bytes from 10.127.10.226: seq=4 ttl=63 time=2.770 ms
  ^C
  --- 10.127.10.226 ping statistics ---
  5 packets transmitted, 5 packets received, 0% packet loss
  round-trip min/avg/max = 1.042/4.907/16.750 ms
  $

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1462154/+subscriptions



[Yahoo-eng-team] [Bug 1462154] Re: With DVR Pings to floating IPs replied with fixed-ips

2015-06-24 Thread Numan Siddique
I tested it and was able to reproduce.
In my setup VM1 is 10.0.0.3 and VM2 is 10.0.0.5 with fip 172.168.1.9 - both 
hosted on the same compute node.

In the q-router namespace, there is a DNAT rule (shown below)

Chain neutron-l3-agent-PREROUTING (1 references)
 pkts bytes target prot opt in out source destination
0 0 REDIRECT   tcp  --  qr-+   *   0.0.0.0/0 169.254.169.254  
tcp dpt:80 redir ports 9697
   12  1008 DNAT   all  --  *  *   0.0.0.0/0 172.168.1.9  
to:10.0.0.5 

Because of this rule, the ping packet destined to the floating ip
(172.168.1.9) is never received by the snat namespace of the controller
node.

Below is the tcpdump of the q-router interface

15:48:51.418852 fa:16:3e:48:fa:e5 > fa:16:3e:01:b5:31, ethertype IPv4 (0x0800), 
length 98: (tos 0x0, ttl 64, id 20248, offset 0, flags [DF], proto ICMP (1), 
length 84)
10.0.0.3 > 172.168.1.9: ICMP echo request, id 29185, seq 0, length 64
15:48:51.418920 fa:16:3e:01:b5:31 > Broadcast, ethertype ARP (0x0806), length 
42: Ethernet (len 6), IPv4 (len 4), Request who-has 10.0.0.5 tell 10.0.0.1, 
length 28
15:48:51.419430 fa:16:3e:ef:ce:6b > fa:16:3e:01:b5:31, ethertype ARP (0x0806), 
length 42: Ethernet (len 6), IPv4 (len 4), Reply 10.0.0.5 is-at 
fa:16:3e:ef:ce:6b, length 28
15:48:51.419446 fa:16:3e:01:b5:31 > fa:16:3e:ef:ce:6b, ethertype IPv4 (0x0800), 
length 98: (tos 0x0, ttl 63, id 20248, offset 0, flags [DF], proto ICMP (1), 
length 84)
10.0.0.3 > 10.0.0.5: ICMP echo request, id 29185, seq 0, length 64
15:48:52.418927 fa:16:3e:48:fa:e5 > fa:16:3e:01:b5:31, ethertype IPv4 (0x0800), 
length 98: (tos 0x0, ttl 64, id 20480, offset 0, flags [DF], proto ICMP (1), 
length 84)
10.0.0.3 > 172.168.1.9: ICMP echo request, id 29185, seq 1, length 64
15:48:52.418996 fa:16:3e:01:b5:31 > fa:16:3e:ef:ce:6b, ethertype IPv4 (0x0800), 
length 98: (tos 0x0, ttl 63, id 20480, offset 0, flags [DF], proto ICMP (1), 
length 84) 


I manually deleted the DNAT rule from iptables and it seemed to work fine 
initially, but it had side effects.

I am not sure if it's worth fixing.

Thanks
Numan



** Changed in: neutron
   Status: In Progress => Opinion

** Changed in: neutron
 Assignee: Numan Siddique (numansiddique) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1462154

Title:
  With DVR Pings to floating IPs replied with fixed-ips

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  On my single node devstack setup, there are 2 VMs hosted.  VM1 has no 
floating IP assigned.  VM2 has a floating IP assigned.  From VM1, ping VM2 
using the floating IP.  The ping output reports that the replies come from 
VM2's fixed IP address.
  The reply should come from VM2's floating IP address.

  This is a DVR problem as it doesn't happen when the L3 agent's mode is
  'legacy'.

  This may be a problem with the NAT rules defined by the DVR L3-agent.

  I used the latest neutron code on the master branch to reproduce, The
  agent_mode is set to 'dvr_snat'.

  
  Here is how the problem is reproduced:

  VM1 and VM2 run on the same host.

  VM1 has fixed IP of 10.11.12.4, no floating-ip associated.
  VM2 has fixed IP of 10.11.12.5  floating-ip=10.127.10.226

  Logged into VM1 from the qrouter namespace.

  From VM1, ping 10.127.10.226; the ping output at VM1 reports that the
  replies come from VM2's fixed IP address:

  # ssh cirros@10.11.12.4
  cirros@10.11.12.4's password: 
  $ ping 10.127.10.226
  PING 10.127.10.226 (10.127.10.226): 56 data bytes
  64 bytes from 10.11.12.5: seq=0 ttl=64 time=4.189 ms
  64 bytes from 10.11.12.5: seq=1 ttl=64 time=1.254 ms
  64 bytes from 10.11.12.5: seq=2 ttl=64 time=2.386 ms
  64 bytes from 10.11.12.5: seq=3 ttl=64 time=2.064 ms
  ^C
  --- 10.127.10.226 ping statistics ---
  4 packets transmitted, 4 packets received, 0% packet loss
  round-trip min/avg/max = 1.254/2.473/4.189 ms
  $ 

  
  If I associate a floating IP on VM1 then repeat the same test, ping reports 
the replies comes from VM2's floating IP:

  # ssh cirros@10.11.12.4
  cirros@10.11.12.4's password: 
  $ ping 10.127.10.226
  PING 10.127.10.226 (10.127.10.226): 56 data bytes
  64 bytes from 10.127.10.226: seq=0 ttl=63 time=16.750 ms
  64 bytes from 10.127.10.226: seq=1 ttl=63 time=2.417 ms
  64 bytes from 10.127.10.226: seq=2 ttl=63 time=1.558 ms
  64 bytes from 10.127.10.226: seq=3 ttl=63 time=1.042 ms
  64 bytes from 10.127.10.226: seq=4 ttl=63 time=2.770 ms
  ^C
  --- 10.127.10.226 ping statistics ---
  5 packets transmitted, 5 packets received, 0% packet loss
  round-trip min/avg/max = 1.042/4.907/16.750 ms
  $

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1462154/+subscriptions


[Yahoo-eng-team] [Bug 1443480] [NEW] Some of the neutron functional tests are failing with import error after unit test tree reorganization

2015-04-13 Thread Numan Siddique
Public bug reported:

Some of the neutron functional tests are failing with import error after
unit test tree reorganization


 Traceback (most recent call last):
ImportError: Failed to import test module: 
neutron.tests.functional.scheduler.test_dhcp_agent_scheduler
Traceback (most recent call last):
  File "/usr/lib64/python2.7/unittest/loader.py", line 254, in _find_tests
    module = self._get_module_from_name(name)
  File "/usr/lib64/python2.7/unittest/loader.py", line 232, in 
_get_module_from_name
    __import__(name)
  File "neutron/tests/functional/scheduler/test_dhcp_agent_scheduler.py", 
line 24, in <module>
    from neutron.tests.unit import test_dhcp_scheduler as test_dhcp_sch
ImportError: cannot import name test_dhcp_scheduler

** Affects: neutron
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443480

Title:
  Some of the neutron functional tests are failing with import error
  after unit test tree reorganization

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Some of the neutron functional tests are failing with import error
  after unit test tree reorganization

  
   Traceback (most recent call last):
  ImportError: Failed to import test module: 
  neutron.tests.functional.scheduler.test_dhcp_agent_scheduler
  Traceback (most recent call last):
    File "/usr/lib64/python2.7/unittest/loader.py", line 254, in _find_tests
      module = self._get_module_from_name(name)
    File "/usr/lib64/python2.7/unittest/loader.py", line 232, in 
  _get_module_from_name
      __import__(name)
    File "neutron/tests/functional/scheduler/test_dhcp_agent_scheduler.py", 
  line 24, in <module>
      from neutron.tests.unit import test_dhcp_scheduler as test_dhcp_sch
  ImportError: cannot import name test_dhcp_scheduler

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443480/+subscriptions



[Yahoo-eng-team] [Bug 1443479] [NEW] Some of the neutron functional tests are failing with import error after unit test tree reorganization

2015-04-13 Thread Numan Siddique
Public bug reported:

Some of the neutron functional tests are failing with import error after
unit test tree reorganization


 Traceback (most recent call last):
ImportError: Failed to import test module: 
neutron.tests.functional.scheduler.test_dhcp_agent_scheduler
Traceback (most recent call last):
  File "/usr/lib64/python2.7/unittest/loader.py", line 254, in _find_tests
    module = self._get_module_from_name(name)
  File "/usr/lib64/python2.7/unittest/loader.py", line 232, in 
_get_module_from_name
    __import__(name)
  File "neutron/tests/functional/scheduler/test_dhcp_agent_scheduler.py", 
line 24, in <module>
    from neutron.tests.unit import test_dhcp_scheduler as test_dhcp_sch
ImportError: cannot import name test_dhcp_scheduler

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443479

Title:
  Some of the neutron functional tests are failing with import error
  after unit test tree reorganization

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Some of the neutron functional tests are failing with import error
  after unit test tree reorganization

  
   Traceback (most recent call last):
  ImportError: Failed to import test module: 
  neutron.tests.functional.scheduler.test_dhcp_agent_scheduler
  Traceback (most recent call last):
    File "/usr/lib64/python2.7/unittest/loader.py", line 254, in _find_tests
      module = self._get_module_from_name(name)
    File "/usr/lib64/python2.7/unittest/loader.py", line 232, in 
  _get_module_from_name
      __import__(name)
    File "neutron/tests/functional/scheduler/test_dhcp_agent_scheduler.py", 
  line 24, in <module>
      from neutron.tests.unit import test_dhcp_scheduler as test_dhcp_sch
  ImportError: cannot import name test_dhcp_scheduler

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443479/+subscriptions



[Yahoo-eng-team] [Bug 1386648] Re: Subnet deleting traceback with VPNaaS

2015-03-20 Thread Numan Siddique
** Changed in: neutron
   Status: In Progress => Invalid

** Changed in: neutron
 Assignee: Numan Siddique (numansiddique) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1386648

Title:
  Subnet deleting traceback with VPNaaS

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  When trying delete subnet which is part of VPN service:
  1) subnet is not deleted
  2) Traceback in neutron:
  2014-10-28 11:15:27.481 TRACE neutron.api.v2.resource DBError: 
(IntegrityError) (1451, 'Cannot delete or update a parent row: a foreign key 
constraint fails (`neutron`.`vpnservices`, CONSTRAINT `vpnservices_ibfk_1` 
FOREIGN KEY (`subnet_id`) REFERENCES `subnets` (`id`))') 'DELETE FROM subnets 
WHERE subnets.id = %s' ('8ab1d17f-db00-4904-aebe-61edf0f627f0',)
  3) No error reporting about the reason

  Steps to reproduce:
  $ neutron net-create netto
  $ neutron subnet-create netto 192.168.12.0/24
  $ neutron router-create router-vpn
  $ neutron router-interface-add router-vpn 
8ab1d17f-db00-4904-aebe-61edf0f627f0 # subnet-id
  $ neutron router-gateway-set router-vpn public
  $ neutron vpn-ipsecpolicy-create policy2
  $ neutron vpn-service-create router-vpn 8ab1d17f-db00-4904-aebe-61edf0f627f0 
# subnet-id
  $ neutron router-interface-delete router-vpn 
8ab1d17f-db00-4904-aebe-61edf0f627f0 # subnet-id

  and now try to delete subnet:
  $ neutron subnet-delete 8ab1d17f-db00-4904-aebe-61edf0f627f0 # subnet-id

  Should be:
  1) A clear error report if the subnet cannot be deleted.
  2) If the subnet is to be deleted, then the VPN service should be deleted 
first.
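
  The failure mode can be sketched outside Neutron. The snippet below is a
  minimal illustration (the table and column names echo the error message
  but are not Neutron's real schema): a row in vpnservices references the
  subnet, so the DELETE fails with an integrity error, which the API layer
  should translate into a clear conflict message instead of a 500.

```python
import sqlite3

# Minimal reproduction of the failure mode (illustrative schema, not
# Neutron's): vpnservices.subnet_id references subnets.id, so deleting
# a referenced subnet raises an integrity error.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE subnets (id TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE vpnservices (id TEXT PRIMARY KEY,"
             " subnet_id TEXT REFERENCES subnets(id))")
conn.execute("INSERT INTO subnets VALUES ('8ab1d17f')")
conn.execute("INSERT INTO vpnservices VALUES ('vpn1', '8ab1d17f')")

def delete_subnet(subnet_id):
    """Translate the raw IntegrityError into a clear conflict message."""
    try:
        conn.execute("DELETE FROM subnets WHERE id = ?", (subnet_id,))
        return "deleted"
    except sqlite3.IntegrityError:
        return "conflict: subnet %s is in use by a VPN service" % subnet_id

result = delete_subnet("8ab1d17f")
```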

  The full traceback:

  2014-10-28 11:15:27.378 DEBUG neutron.plugins.ml2.plugin 
[req-3c9ecbda-6f86-4eff-92f2-b97e6ea34581 admin 
852452b6e465460b86edeeccfd10d165] Committing transacti
  on from (pid=4046) delete_subnet 
/opt/stack/neutron/neutron/plugins/ml2/plugin.py:744
  2014-10-28 11:15:27.381 ERROR oslo.db.sqlalchemy.exc_filters 
[req-3c9ecbda-6f86-4eff-92f2-b97e6ea34581 admin 
852452b6e465460b86edeeccfd10d165] DBAPIError excep
  tion wrapped from (IntegrityError) (1451, 'Cannot delete or update a parent 
row: a foreign key constraint fails (`neutron`.`vpnservices`, CONSTRAINT 
`vpnservic
  es_ibfk_1` FOREIGN KEY (`subnet_id`) REFERENCES `subnets` (`id`))') 'DELETE 
FROM subnets WHERE subnets.id = %s' ('8ab1d17f-db00-4904-aebe-61edf0f627f0',)
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters Traceback (most 
recent call last):
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/compat/handle_error.py,
 line 59
  , in _handle_dbapi_exception
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters e, 
statement, parameters, cursor, context)
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1024, in 
_handle_dbapi_e
  xception
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters exc_info
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py, line 196, in 
raise_from_cause
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters 
reraise(type(exception), exception, tb=exc_tb)
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 867, in 
_execute_context
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters context)
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 324, in 
do_execute
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py, line 174, in execute
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters 
self.errorhandler(self, exc, value)
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters   File 
/usr/lib/python2.7/dist-packages/MySQLdb/connections.py, line 36, in 
defaulterrorhandler
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters raise 
errorclass, errorvalue
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters IntegrityError: 
(IntegrityError) (1451, 'Cannot delete or update a parent row: a foreign key 
const
  raint fails (`neutron`.`vpnservices`, CONSTRAINT `vpnservices_ibfk_1` FOREIGN 
KEY (`subnet_id`) REFERENCES `subnets` (`id`))') 'DELETE FROM subnets WHERE 
subne
  ts.id = %s' ('8ab1d17f-db00-4904-aebe-61edf0f627f0',)
  2014-10-28 11:15:27.381 TRACE oslo.db.sqlalchemy.exc_filters 
  2014-10-28 11:15:27.479 DEBUG neutron.openstack.common.lockutils 
[req-3c9ecbda-6f86-4eff-92f2-b97e6ea34581 admin

[Yahoo-eng-team] [Bug 1391806] Re: 'neutron port-list' is missing binding:vnic_type filter

2015-03-15 Thread Numan Siddique
** Changed in: neutron
 Assignee: Numan Siddique (numansiddique) => (unassigned)

** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1391806

Title:
  'neutron port-list' is missing binding:vnic_type filter

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  An example usage would be to filter the ports that have 
binding:vnic_type=direct
  # neutron port-list --binding:vnic_type=direct

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1391806/+subscriptions



[Yahoo-eng-team] [Bug 1391806] Re: 'neutron port-list' is missing binding:vnic_type filter

2015-03-15 Thread Numan Siddique
** Changed in: neutron
   Status: Opinion => Invalid

** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1391806

Title:
  'neutron port-list' is missing binding:vnic_type filter

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  An example usage would be to filter the ports that have 
binding:vnic_type=direct
  # neutron port-list --binding:vnic_type=direct

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1391806/+subscriptions



[Yahoo-eng-team] [Bug 1366067] Re: Neutron internal error on empty port update

2015-03-09 Thread Numan Siddique
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1366067

Title:
  Neutron internal error on empty port update

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  PUTting an empty update object to the neutron port-update call causes
  an internal server error:

  $ curl -H 'Content-Type: application/json' -H 'X-Auth-Token:  ...' -v -i -X 
PUT -d '{"port": {}}' 
'http://127.0.1.1:9696/v2.0/ports/fc092916-c766-4e70-8788-b9b3edcd4c22' 
  * Hostname was NOT found in DNS cache
  *   Trying 127.0.1.1...
  * Connected to 127.0.1.1 (127.0.1.1) port 9696 (#0)
  > PUT /v2.0/ports/fc092916-c766-4e70-8788-b9b3edcd4c22 HTTP/1.1
  > User-Agent: curl/7.35.0
  > Host: 127.0.1.1:9696
  > Accept: */*
  > Content-Type: application/json
  > X-Auth-Token:  ...
  > Content-Length: 12
  > 
  * upload completely sent off: 12 out of 12 bytes
  < HTTP/1.1 500 Internal Server Error
  HTTP/1.1 500 Internal Server Error
  < Content-Type: application/json; charset=UTF-8
  Content-Type: application/json; charset=UTF-8
  < Content-Length: 88
  Content-Length: 88
  < X-Openstack-Request-Id: req-97b2b096-263d-466c-9349-b45b135db499
  X-Openstack-Request-Id: req-97b2b096-263d-466c-9349-b45b135db499
  < Date: Fri, 05 Sep 2014 14:43:28 GMT
  Date: Fri, 05 Sep 2014 14:43:28 GMT

  < 
  * Connection #0 to host 127.0.1.1 left intact
  {"NeutronError": "Request Failed: internal server error while processing 
your request."}
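
  One plausible guard, sketched here purely as an illustration (this is
  not Neutron's actual validation code), is to reject an update whose
  body carries no attributes before it ever reaches the database, where
  an empty attribute set produces a malformed UPDATE statement:

```python
# Hypothetical request-validation sketch: an update body with no
# attributes should yield a 400, not a 500 from a broken UPDATE.
def validate_update(body, resource="port"):
    attrs = body.get(resource) if isinstance(body, dict) else None
    if not isinstance(attrs, dict) or not attrs:
        return 400, "no attributes specified for the update"
    return 200, attrs

status, detail = validate_update({"port": {}})
```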

  The neutron log shows an invalid update SQL command:

  2014-09-05 14:43:28.751 2487 INFO neutron.wsgi [-] (2487) accepted
  ('127.0.0.1', 53273)

  2014-09-05 14:43:28.812 2487 ERROR 
neutron.openstack.common.db.sqlalchemy.session [-] DB exception wrapped.
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session Traceback (most recent call 
last):
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session   File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py,
 line 597, in _wrap
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session return f(*args, **kwargs)
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session   File 
/usr/lib/python2.7/dist-packages/neutron/openstack/common/db/sqlalchemy/session.py,
 line 836, in flush
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session return super(Session, 
self).flush(*args, **kwargs)
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1818, in 
flush
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session self._flush(objects)
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1936, in 
_flush
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session 
transaction.rollback(_capture_exception=True)
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py, line 58, in 
__exit__
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session compat.reraise(exc_type, 
exc_value, exc_tb)
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 1900, in 
_flush
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session flush_context.execute()
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py, line 372, in 
execute
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session rec.execute(self)
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py, line 525, in 
execute
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session uow
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py, line 59, in 
save_obj
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session mapper, table, update)
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session   File 
/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py, line 495, in 
_emit_update_statements
  2014-09-05 14:43:28.812 2487 TRACE 
neutron.openstack.common.db.sqlalchemy.session execute(statement, params)
  2014-09-05 

[Yahoo-eng-team] [Bug 1423213] [NEW] ipsec site connection should be set to ERROR state if the peer address is fqdn and cannot be resolved

2015-02-18 Thread Numan Siddique
Public bug reported:

When creating an IPsec site connection, if the peer address provided is an
FQDN that cannot be resolved, the status of the connection stays in
PENDING_CREATE. It should be set to the ERROR state.

This bug is a follow-up to bug
https://bugs.launchpad.net/neutron/+bug/1405413.
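
A minimal sketch of the requested behavior, assuming a simple resolver
check (the function name and flow are illustrative, not the VPNaaS
driver's actual code):

```python
import socket

# If the peer address (possibly an FQDN) cannot be resolved, report
# ERROR instead of leaving the connection in PENDING_CREATE.
def initial_connection_status(peer_address):
    try:
        socket.getaddrinfo(peer_address, None)
    except socket.gaierror:
        return "ERROR"
    return "PENDING_CREATE"
```

A resolvable name such as localhost would proceed through
PENDING_CREATE as today, while an unresolvable FQDN would be marked
ERROR immediately.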

** Affects: neutron
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1423213

Title:
  ipsec site connection should be set to ERROR state if the peer address
  is fqdn and cannot be resolved

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When creating an IPsec site connection, if the peer address provided is
  an FQDN that cannot be resolved, the status of the connection stays in
  PENDING_CREATE. It should be set to the ERROR state.

  This bug is a follow-up to bug
  https://bugs.launchpad.net/neutron/+bug/1405413.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1423213/+subscriptions



[Yahoo-eng-team] [Bug 1405146] Re: cannot create instance if security groups are disabled

2015-01-29 Thread Numan Siddique
** Changed in: nova
   Status: New => Invalid

** Changed in: nova
 Assignee: Numan Siddique (numansiddique) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1405146

Title:
  cannot create instance if security groups are disabled

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  2014.2.1 deployed by packstack on CentOS 7.

  I completely disabled security groups in both neutron (ml2 plugin) and
  nova:

  * /etc/neutron/plugin.ini
  enable_security_group = False

  * /etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini
  firewall_driver=neutron.agent.firewall.NoopFirewallDriver

  * /etc/nova/nova.conf
  security_group_api=neutron
  firewall_driver=nova.virt.firewall.NoopFirewallDriver

  [root@juno1 ~(keystone_admin)]# nova boot --flavor m1.small --image
  fedora-21 --nic net-id=5d37cd0b-7ad4-439e-a0f9-a4a430ff696b fedora-
  test

  From the nova-compute log instance creation fails with:

  2014-12-23 14:21:26.747 13009 ERROR nova.compute.manager [-] [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] Instance failed to spawn
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] Traceback (most recent call last):
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2243, in 
_build_resources
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] yield resources
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 2113, in 
_build_and_run_ins
  tance
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] block_device_info=block_device_info)
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 2615, in 
spawn
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] admin_pass=admin_password)
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 3096, in 
_create_image
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] instance, network_info, admin_pass, 
files, suffix)
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 2893, in 
_inject_data
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] network_info, 
libvirt_virt_type=CONF.libvirt.virt_type)
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/virt/netutils.py, line 87, in 
get_injected_network_t
  emplate
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] if not (network_info and template):
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/network/model.py, line 463, in __len__
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] return self._sync_wrapper(fn, *args, 
**kwargs)
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/network/model.py, line 450, in 
_sync_wrapper
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] self.wait()
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/nova/network/model.py, line 482, in wait
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] self[:] = self._gt.wait()
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7/site-packages/eventlet/greenthread.py, line 173, in wait
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9] return self._exit_event.wait()
  2014-12-23 14:21:26.747 13009 TRACE nova.compute.manager [instance: 
11d26eca-049c-415c-b74b-70a6e0ffb6c9]   File 
/usr/lib/python2.7

[Yahoo-eng-team] [Bug 1415891] [NEW] neutron-vpnaas test cases are failing

2015-01-29 Thread Numan Siddique
Public bug reported:

neutron-vpnaas unit test cases are failing because of this commit.
https://github.com/openstack/neutron/commit/47ddd2cc03528d9bd66a18d8fcc74ae26aa83497

The test cases need to be updated accordingly.

** Affects: neutron
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1415891

Title:
  neutron-vpnaas  test cases are failing

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  neutron-vpnaas unit test cases are failing because of this commit.
  
https://github.com/openstack/neutron/commit/47ddd2cc03528d9bd66a18d8fcc74ae26aa83497

  The test cases need to be updated accordingly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1415891/+subscriptions



[Yahoo-eng-team] [Bug 1409733] Re: adopt namespace-less oslo imports

2015-01-20 Thread Numan Siddique
** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Numan Siddique (numansiddique)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1409733

Title:
  adopt namespace-less oslo imports

Status in Cinder:
  New
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in Python client library for Neutron:
  In Progress

Bug description:
  Oslo is migrating from oslo.* namespace to separate oslo_* namespaces
  for each library: https://blueprints.launchpad.net/oslo-
  incubator/+spec/drop-namespace-packages

  We need to adopt to the new paths in neutron. Specifically, for
  oslo.config, oslo.middleware, oslo.i18n, oslo.serialization,
  oslo.utils.
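
  The mechanical change can be shown with strings, so this sketch runs
  without oslo installed (the rewrite rule below is the general pattern;
  individual modules may need more care than a plain substitution):

```python
# oslo.* namespace imports become per-library oslo_* package imports,
# e.g. "from oslo.config import cfg" -> "from oslo_config import cfg".
def modernize(line):
    return line.replace("from oslo.", "from oslo_")

new_form = modernize("from oslo.config import cfg")
```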

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1409733/+subscriptions



[Yahoo-eng-team] [Bug 1412770] [NEW] Enable functional test job in openstack ci for neutron-vpnaas

2015-01-20 Thread Numan Siddique
Public bug reported:

Enable functional test job in openstack ci for neutron-vpnaas

** Affects: openstack-ci
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: New

** Project changed: neutron => openstack-ci

** Changed in: openstack-ci
 Assignee: (unassigned) => Numan Siddique (numansiddique)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1412770

Title:
  Enable functional test job in openstack ci for neutron-vpnaas

Status in OpenStack Core Infrastructure:
  New

Bug description:
  Enable functional test job in openstack ci for neutron-vpnaas

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-ci/+bug/1412770/+subscriptions



[Yahoo-eng-team] [Bug 1386046] Re: test_contrail_plugin.TestContrailL3NatTestCase fakes core plugin very trikerly

2014-12-30 Thread Numan Siddique
I am not sure if the fix is required here, as the patch
https://review.openstack.org/#/c/124699/ has a fix.


** Changed in: neutron
 Assignee: Numan Siddique (numansiddique) => (unassigned)

** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1386046

Title:
  test_contrail_plugin.TestContrailL3NatTestCase  fakes core plugin very
  trikerly

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  test_contrail_plugin.TestContrailL3NatTestCase fakes the core plugin in
  a very tricky way.

  It is done in such a tricky way that a workaround is necessary for the
  test to pass. The test should be sorted out so that the workaround can
  be eliminated.
  https://review.openstack.org/#/c/124699/
  is the related patch. I hit this issue while working on that patch, but
  this is a generic issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1386046/+subscriptions



[Yahoo-eng-team] [Bug 1391806] Re: 'neutron port-list' is missing binding:vnic_type filter

2014-12-15 Thread Numan Siddique
Moving this bug to neutron because:
1. It is not a client bug.
2. When I call neutron port-list --binding:vnic_type=direct, 
binding:vnic_type=direct is passed as a filter to neutron.
3. But neutron is not handling this filter, so this needs to be fixed in 
neutron and not in the neutron client.

Below is the q-svc log when the command is called

from (pid=2020) _http_log_response 
/usr/local/lib/python2.7/dist-packages/keystoneclient/session.py:190
2014-12-15 14:03:50.056 INFO neutron.wsgi 
[req-65df5322-5839-4828-8023-6a00b25daafc admin 
465fd00b678a426fb1bbceec638e28b2] 10.43.100.9 - - [15/Dec/2014 14:03:50] GET 
/v2.0/ports.json?binding%3Avnic_type=direct HTTP/1.1 200 5392 0.088472
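
The missing server-side handling can be sketched as a plain attribute
filter (illustrative only; Neutron's real filtering happens in the
database query layer):

```python
# Ports carry a binding:vnic_type attribute; a list call should honor
# a filter on it instead of silently ignoring it.
ports = [
    {"id": "p1", "binding:vnic_type": "direct"},
    {"id": "p2", "binding:vnic_type": "normal"},
]

def list_ports(filters=None):
    filters = filters or {}
    return [p for p in ports
            if all(p.get(key) in values for key, values in filters.items())]

direct_ports = list_ports({"binding:vnic_type": ["direct"]})
```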



** Project changed: python-neutronclient => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1391806

Title:
  'neutron port-list' is missing binding:vnic_type filter

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  An example usage would be to filter the ports that have 
binding:vnic_type=direct
  # neutron port-list --binding:vnic_type=direct

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1391806/+subscriptions



[Yahoo-eng-team] [Bug 1402197] [NEW] neutron-vpnaas test code still has some neutron.services

2014-12-13 Thread Numan Siddique
Public bug reported:

neutron-vpnaas test code still has some neutron.services. imports
instead of neutron_vpnaas.services, because of which the VPN unit test
cases fail when run locally.

** Affects: neutron
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1402197

Title:
  neutron-vpnaas test code still has some neutron.services

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  neutron-vpnaas test code still has some neutron.services. imports
  instead of neutron_vpnaas.services, because of which the VPN unit
  test cases fail when run locally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1402197/+subscriptions



[Yahoo-eng-team] [Bug 1401895] [NEW] check-grenade-dsvm-neutron is failing since today

2014-12-12 Thread Numan Siddique
Public bug reported:

check-grenade-dsvm-neutron is failing after the services split. Below is
the screen log of q-svc

http://logs.openstack.org/44/141144/1/check/check-grenade-dsvm-
neutron/8d5bf1b/logs/new/screen-q-svc.txt.gz


2014-12-12 04:38:17.575 10352 INFO neutron.manager [-] Loading Plugin: 
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
2014-12-12 04:38:17.648 10352 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connecting to AMQP server on localhost:5672
2014-12-12 04:38:17.657 10352 INFO oslo.messaging._drivers.impl_rabbit [-] 
Connected to AMQP server on 127.0.0.1:5672
2014-12-12 04:38:17.667 10352 INFO neutron.db.l3_agentschedulers_db [-] 
Skipping period L3 agent status check because automatic router rescheduling is 
disabled.
2014-12-12 04:38:17.667 10352 DEBUG neutron.manager [-] Successfully loaded 
L3_ROUTER_NAT plugin. Description: L3 Router Service Plugin for basic L3 
forwarding between (L2) Neutron networks and access to external networks via a 
NAT gateway. _load_service_plugins /opt/stack/new/neutron/neutron/manager.py:193
2014-12-12 04:38:17.667 10352 INFO neutron.manager [-] Loading Plugin: 
neutron.services.loadbalancer.plugin.LoadBalancerPlugin
2014-12-12 04:38:17.743 10352 ERROR neutron.services.service_base [-] Error 
loading provider 
'neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver'
 for service LOADBALANCER
2014-12-12 04:38:17.743 10352 TRACE neutron.services.service_base Traceback 
(most recent call last):
2014-12-12 04:38:17.743 10352 TRACE neutron.services.service_base   File 
/opt/stack/new/neutron/neutron/services/service_base.py, line 80, in 
load_drivers
2014-12-12 04:38:17.743 10352 TRACE neutron.services.service_base 
provider['driver'], plugin
2014-12-12 04:38:17.743 10352 TRACE neutron.services.service_base   File 
/usr/local/lib/python2.7/dist-packages/oslo/utils/importutils.py, line 38, in 
import_object
2014-12-12 04:38:17.743 10352 TRACE neutron.services.service_base return 
import_class(import_str)(*args, **kwargs)
2014-12-12 04:38:17.743 10352 TRACE neutron.services.service_base   File 
/usr/local/lib/python2.7/dist-packages/oslo/utils/importutils.py, line 27, in 
import_class
2014-12-12 04:38:17.743 10352 TRACE neutron.services.service_base 
__import__(mod_str)
2014-12-12 04:38:17.743 10352 TRACE neutron.services.service_base ImportError: 
No module named drivers.haproxy.plugin_driver
2014-12-12 04:38:17.743 10352 TRACE neutron.services.service_base 
2014-12-12 04:38:17.744 10352 DEBUG neutron.openstack.common.lockutils [-] 
Releasing semaphore manager lock 
/opt/stack/new/neutron/neutron/openstack/common/lockutils.py:238
2014-12-12 04:38:17.744 10352 DEBUG neutron.openstack.common.lockutils [-] 
Semaphore / lock released _create_instance inner 
/opt/stack/new/neutron/neutron/openstack/common/lockutils.py:275
2014-12-12 04:38:17.744 10352 ERROR neutron.common.config [-] Unable to load 
neutron from configuration file /etc/neutron/api-paste.ini.
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config Traceback (most 
recent call last):
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config   File 
/opt/stack/new/neutron/neutron/common/config.py, line 189, in load_paste_app
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config app = 
deploy.loadapp(config:%s % config_path, name=app_name)
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config   File 
/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 247, in 
loadapp
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config return 
loadobj(APP, uri, name=name, **kw)
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config   File 
/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 272, in 
loadobj
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config return 
context.create()
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config   File 
/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 710, in create
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config return 
self.object_type.invoke(self)
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config   File 
/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 144, in invoke
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config 
**context.local_conf)
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config   File 
/usr/lib/python2.7/dist-packages/paste/deploy/util.py, line 55, in fix_call
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config val = 
callable(*args, **kw)
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config   File 
/usr/lib/python2.7/dist-packages/paste/urlmap.py, line 28, in urlmap_factory
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config app = 
loader.get_app(app_name, global_conf=global_conf)
2014-12-12 04:38:17.744 10352 TRACE neutron.common.config   File 
/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py, line 350, in 
get_app
2014-12-12 04:38:17.744 

[Yahoo-eng-team] [Bug 1397209] [NEW] HA router is also scheduled on agent(s) in dvr mode

2014-11-27 Thread Numan Siddique
Public bug reported:

When an interface is added to an HA router, the router is also scheduled on 
agents running in dvr mode.
An HA router should be scheduled only on agents running in legacy or 
dvr_snat mode.
The issue is caused by this line: 
https://github.com/openstack/neutron/blob/master/neutron/db/l3_hamode_db.py#L253
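
The fix amounts to filtering scheduling candidates by agent mode. A
hedged sketch (the agent modes follow the description above; the data
structures are illustrative, not Neutron's scheduler internals):

```python
# HA routers belong only on agents in 'legacy' or 'dvr_snat' mode;
# agents in plain 'dvr' mode must be excluded from candidates.
HA_CAPABLE_MODES = ("legacy", "dvr_snat")

def ha_candidates(agents):
    return [a for a in agents if a.get("agent_mode") in HA_CAPABLE_MODES]

agents = [
    {"host": "network-node-1", "agent_mode": "legacy"},
    {"host": "compute-node-1", "agent_mode": "dvr"},
    {"host": "network-node-2", "agent_mode": "dvr_snat"},
]
eligible = ha_candidates(agents)
```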

** Affects: neutron
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1397209

Title:
  HA router is also scheduled on agent(s) in dvr mode

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When an interface is added to an HA router, the router is also scheduled on 
agents running in dvr mode.
  An HA router should be scheduled only on agents running in legacy or 
dvr_snat mode.
  The issue is caused by this line: 
https://github.com/openstack/neutron/blob/master/neutron/db/l3_hamode_db.py#L253

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1397209/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377280] Re: Adding gateway to flat external network breaks HA routers

2014-10-21 Thread Numan Siddique
** Changed in: neutron
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1377280

Title:
  Adding gateway to flat external network breaks HA routers

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I am running Juno on Ubuntu 14.04. OpenStack is installed from source
  and updated to the latest this morning. I am trying to build HA
  routers using VXLAN tunnels. When I set up my external network as a
  VXLAN network type, everything works properly. However, if I delete the
  VXLAN-based external network as the router's gateway and instead set a
  flat network as the gateway, everything in the qrouter namespace
  disappears except for the lo interface.

  Here is my ml2_conf.ini file:
  [ml2]
  type_drivers = vxlan,flat
  tenant_network_types = vxlan,flat
  mechanism_drivers = linuxbridge,l2population
  [ml2_type_flat]
  flat_networks = physnet1
  [ml2_type_vlan]
  [ml2_type_gre]
  [ml2_type_vxlan]
  vni_ranges = 100:200
  vxlan_group = 224.0.0.1
  [securitygroup]
  firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
  enable_security_group = True
  [agent]
  l2population = True
  tunnel_type = vxlan
  [linuxbridge]
  physical_interface_mappings = physnet1:vethOVS
  [l2pop]
  agent_boot_time = 180
  [vxlan]
  enable_vxlan = True
  vxlan_group = 224.0.0.1
  local_ip = 10.0.2.5
  l2_population = True

  
  contents of the qrouter namespace with external VXLAN network:
  root@network:~# ip netns exec qrouter-5e9b2a5f-4431-48a0-ad31-a46c987506cf ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host 
 valid_lft forever preferred_lft forever
  2: ha-9c3955a7-32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  link/ether fa:16:3e:10:4d:61 brd ff:ff:ff:ff:ff:ff
  inet 169.254.192.12/18 brd 169.254.255.255 scope global ha-9c3955a7-32
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fe10:4d61/64 scope link 
 valid_lft forever preferred_lft forever
  3: qr-88c5895b-17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  link/ether fa:16:3e:6d:54:62 brd ff:ff:ff:ff:ff:ff
  inet 10.2.0.1/28 scope global qr-88c5895b-17
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fe6d:5462/64 scope link 
 valid_lft forever preferred_lft forever
  4: qr-b4a07ce5-34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  link/ether fa:16:3e:ae:b8:65 brd ff:ff:ff:ff:ff:ff
  inet 10.1.0.1/28 scope global qr-b4a07ce5-34
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:feae:b865/64 scope link 
 valid_lft forever preferred_lft forever
  5: qg-f454eff8-ea: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  link/ether fa:16:3e:37:3e:a9 brd ff:ff:ff:ff:ff:ff
  inet 172.16.0.32/24 scope global qg-f454eff8-ea
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fe37:3ea9/64 scope link 
 valid_lft forever preferred_lft forever

  after removing the router gateway:
  root@network:~# ip netns exec qrouter-a423edc7-5e12-4c15-a4eb-989c73cdb704 ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host 
 valid_lft forever preferred_lft forever
  2: ha-87269710-52: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  link/ether fa:16:3e:00:b6:9d brd ff:ff:ff:ff:ff:ff
  inet 169.254.192.14/18 brd 169.254.255.255 scope global ha-87269710-52
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fe00:b69d/64 scope link 
 valid_lft forever preferred_lft forever
  3: qr-dc345670-13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  link/ether fa:16:3e:13:2b:52 brd ff:ff:ff:ff:ff:ff
  inet6 fe80::f816:3eff:fe13:2b52/64 scope link 
 valid_lft forever preferred_lft forever
  4: qr-bfe17de0-2c: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
  link/ether fa:16:3e:04:ef:91 brd ff:ff:ff:ff:ff:ff
  inet6 fe80::f816:3eff:fe04:ef91/64 scope link 
 valid_lft forever preferred_lft forever

  Everything looks good up to this point.

  Building the following external 

[Yahoo-eng-team] [Bug 1376307] [NEW] nova compute is crashing with the error TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'

2014-10-01 Thread Numan Siddique
Public bug reported:

nova compute is crashing with the below error when nova compute is
started


2014-10-01 14:50:26.854 DEBUG nova.virt.libvirt.driver [-] Updating host stats from (pid=9945) update_status /opt/stack/nova/nova/virt/libvirt/driver.py:6361
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 449, in fire_timers
    timer()
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 58, in __call__
    cb(*args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 167, in _do_send
    waiter.switch(result)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 207, in main
    result = function(*args, **kwargs)
  File "/opt/stack/nova/nova/openstack/common/service.py", line 490, in run_service
    service.start()
  File "/opt/stack/nova/nova/service.py", line 181, in start
    self.manager.pre_start_hook()
  File "/opt/stack/nova/nova/compute/manager.py", line 1152, in pre_start_hook
    self.update_available_resource(nova.context.get_admin_context())
  File "/opt/stack/nova/nova/compute/manager.py", line 5946, in update_available_resource
    nodenames = set(self.driver.get_available_nodes())
  File "/opt/stack/nova/nova/virt/driver.py", line 1237, in get_available_nodes
    stats = self.get_host_stats(refresh=refresh)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5771, in get_host_stats
    return self.host_state.get_host_stats(refresh=refresh)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 470, in host_state
    self._host_state = HostState(self)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6331, in __init__
    self.update_status()
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6387, in update_status
    numa_topology = self.driver._get_host_numa_topology()
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4828, in _get_host_numa_topology
    for cell in topology.cells])
TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
2014-10-01 14:50:26.989 ERROR nova.openstack.common.threadgroup [-] unsupported operand type(s) for /: 'NoneType' and 'int'


Seems like the commit 
https://github.com/openstack/nova/commit/6a374f21495c12568e4754800574e6703a0e626f
is the cause.
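For illustration, a hedged sketch of the failure mode and a defensive fix (not nova's actual code): libvirt can report a NUMA cell without memory info, so the cell's memory arrives as None and "None / int" raises the TypeError above. Guarding the value before dividing avoids the crash; all names here are illustrative.

```python
# Illustrative only: treat a missing (None) NUMA-cell memory value as 0
# instead of letting the division raise TypeError.

KIB_PER_MIB = 1024

def cell_memory_mb(cell_memory_kib):
    # "cell_memory_kib or 0" converts None to 0 before the division,
    # so this never hits "unsupported operand type(s) for /".
    return (cell_memory_kib or 0) // KIB_PER_MIB

print(cell_memory_mb(None))      # -> 0 (no crash)
print(cell_memory_mb(2097152))   # -> 2048
```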

** Affects: nova
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Numan Siddique (numansiddique)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376307

Title:
  nova compute is crashing with the error TypeError: unsupported operand
  type(s) for /: 'NoneType' and 'int'

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  nova compute is crashing with the below error when nova compute is
  started

  
  2014-10-01 14:50:26.854 DEBUG nova.virt.libvirt.driver [-] Updating host stats from (pid=9945) update_status /opt/stack/nova/nova/virt/libvirt/driver.py:6361
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 449, in fire_timers
      timer()
    File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 58, in __call__
      cb(*args, **kw)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 167, in _do_send
      waiter.switch(result)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 207, in main
      result = function(*args, **kwargs)
    File "/opt/stack/nova/nova/openstack/common/service.py", line 490, in run_service
      service.start()
    File "/opt/stack/nova/nova/service.py", line 181, in start
      self.manager.pre_start_hook()
    File "/opt/stack/nova/nova/compute/manager.py", line 1152, in pre_start_hook
      self.update_available_resource(nova.context.get_admin_context())
    File "/opt/stack/nova/nova/compute/manager.py", line 5946, in update_available_resource
      nodenames = set(self.driver.get_available_nodes())
    File "/opt/stack/nova/nova/virt/driver.py", line 1237, in get_available_nodes
      stats = self.get_host_stats(refresh=refresh)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5771, in get_host_stats
      return self.host_state.get_host_stats(refresh=refresh)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 470, in host_state
      self._host_state = HostState(self)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6331, in __init__
      self.update_status()
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6387, in update_status
      numa_topology = self.driver

[Yahoo-eng-team] [Bug 1373872] Re: OpenContrail neutron plugin doesn't support portbinding.vnic_type

2014-10-01 Thread Numan Siddique
** This bug is no longer a duplicate of bug 1370077
   Set default vnic_type in neutron.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373872

Title:
  OpenContrail neutron plugin doesn't support portbinding.vnic_type

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  The OpenContrail neutron plugin does not support portbindings.vnic_type
  during port creation. Nova expects portbindings.vnic_type.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373872/+subscriptions



[Yahoo-eng-team] [Bug 1373872] [NEW] OpenContrail neutron plugin doesn't support portbinding.vnic_type

2014-09-25 Thread Numan Siddique
Public bug reported:

The OpenContrail neutron plugin does not support portbindings.vnic_type
during port creation. Nova expects portbindings.vnic_type.

** Affects: neutron
 Importance: Undecided
 Assignee: Numan Siddique (numansiddique)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Numan Siddique (numansiddique)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373872

Title:
  OpenContrail neutron plugin doesn't support portbinding.vnic_type

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The OpenContrail neutron plugin does not support portbindings.vnic_type
  during port creation. Nova expects portbindings.vnic_type.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373872/+subscriptions



[Yahoo-eng-team] [Bug 1367685] Re: neutron port-create returning "No more IP addresses available on network" error when the subnet has two allocation pools with same start and end IPs

2014-09-10 Thread Numan Siddique
Thanks Sridhar.


** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
 Assignee: Numan Siddique (numansiddique) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367685

Title:
  neutron port-create returning "No more IP addresses available on
  network" error when the subnet has two allocation pools with same
  start and end IPs

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  neutron port-create returns a "No more IP addresses available on network"
error when the subnet has two allocation pools with the same start and end IPs.
  On running 'neutron port-list' the port is listed.
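  A hedged illustration (plain Python, not neutron code) of why such a subnet is valid: an allocation pool whose start equals its end still holds exactly one usable address, so the two single-address pools in the transcript below offer two IPs in total.

```python
# Illustrative only: expand inclusive start..end allocation pools and
# show that start == end yields exactly one address per pool.
import ipaddress

def pool_addresses(start, end):
    """Expand an inclusive start..end allocation pool into its addresses."""
    first = int(ipaddress.ip_address(start))
    last = int(ipaddress.ip_address(end))
    return [str(ipaddress.ip_address(n)) for n in range(first, last + 1)]

pools = [("30.0.0.2", "30.0.0.2"), ("30.0.0.5", "30.0.0.5")]
available = [ip for start, end in pools for ip in pool_addresses(start, end)]
print(available)  # -> ['30.0.0.2', '30.0.0.5']
```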

  
  neutron net-create test
  Created a new network:
  +-----------------+--------------------------------------+
  | Field           | Value                                |
  +-----------------+--------------------------------------+
  | admin_state_up  | True                                 |
  | id              | 4151b7e5-7fdd-4975-a5f1-b45ee8a0ae73 |
  | name            | test                                 |
  | router:external | False                                |
  | shared          | False                                |
  | status          | ACTIVE                               |
  | subnets         |                                      |
  | tenant_id       | 5227a52545934d1ca0ad3b3fdb163863     |
  +-----------------+--------------------------------------+
  ubuntu@oc-ovsvm:~$ neutron subnet-create test 30.0.0.0/24 --allocation-pool start=30.0.0.2,end=30.0.0.2 --allocation-pool start=30.0.0.5,end=30.0.0.5
  Created a new subnet:
  +-------------------+-------------------------------------------+
  | Field             | Value                                     |
  +-------------------+-------------------------------------------+
  | allocation_pools  | {"start": "30.0.0.2", "end": "30.0.0.2"} |
  |                   | {"start": "30.0.0.5", "end": "30.0.0.5"} |
  | cidr              | 30.0.0.0/24                               |
  | dns_nameservers   |                                           |
  | enable_dhcp       | True                                      |
  | gateway_ip        | 30.0.0.1                                  |
  | host_routes       |                                           |
  | id                | 41b9e7db-3be0-4fa0-954c-6693119ba6ce      |
  | ip_version        | 4                                         |
  | ipv6_address_mode |                                           |
  | ipv6_ra_mode      |                                           |
  | name              |                                           |
  | network_id        | 4151b7e5-7fdd-4975-a5f1-b45ee8a0ae73      |
  | tenant_id         | 5227a52545934d1ca0ad3b3fdb163863          |
  +-------------------+-------------------------------------------+
  ubuntu@oc-ovsvm:~$ 
  ubuntu@oc-ovsvm:~$ 
  ubuntu@oc-ovsvm:~$ neutron port-create test
  Created a new port:
  +-----------------------+----------------------------------------------------------------------------------+
  | Field                 | Value                                                                            |
  +-----------------------+----------------------------------------------------------------------------------+
  | admin_state_up        | True                                                                             |
  | allowed_address_pairs |                                                                                  |
  | binding:vnic_type     | normal                                                                           |
  | device_id             |                                                                                  |
  | device_owner          |                                                                                  |
  | fixed_ips             | {"subnet_id": "41b9e7db-3be0-4fa0-954c-6693119ba6ce", "ip_address": "30.0.0.2"} |
  | id                    | dbb80785-47ae-4a79-89a8-657c667e9bd2                                             |
  | mac_address           | fa:16:3e:27:77:6b                                                                |
  | name                  |                                                                                  |
  | network_id            | 4151b7e5-7fdd-4975-a5f1-b45ee8a0ae73                                             |
  | security_groups       | ac026f4e-8b28-4523-88dd-4191c2420aae                                             |
  | status                | DOWN                                                                             |
  | tenant_id             | 5227a52545934d1ca0ad3b3fdb163863
[Yahoo-eng-team] [Bug 1303759] [NEW] neutron net-create is failing and apiSrv is throwing an exception

2014-04-07 Thread Numan Siddique
Public bug reported:

When I run ./stack.sh with localrc configured to run as a controller node, 
stack.sh fails with the below error
 neutron net-create --tenant-id a5ceeadae4c44781bfee71554f283362 private
2014-04-07 11:33:25.224 | ++ grep ' id '
2014-04-07 11:33:25.226 | ++ get_field 2
2014-04-07 11:33:25.228 | ++ read data
2014-04-07 11:33:26.119 | Request Failed: internal server error while 
processing your request.
2014-04-07 11:33:26.136 | + NET_ID=
2014-04-07 11:33:26.138 | + die_if_not_set 397 NET_ID 'Failure creating NET_ID 
for  a5ceeadae4c44781bfee71554f283362'
2014-04-07 11:33:26.140 | + local exitcode=0
2014-04-07 11:33:26.142 | [Call Trace]
2014-04-07 11:33:26.144 | ./stack.sh:1188:create_neutron_initial_network

screen-apiSrv.log has the below exception at the beginning

ubuntu@oc-comp2:~/devstack$ python /usr/local/lib/python2.7/dist-packages/vnc_cfg_api_server/vnc_cfg_api_server.py --conf_file /etc/contrail/api_server.conf --rabbit_password contrail123 && echo $! > /opt/stack/status/contrail/apiSrv.pid; fg || echo "apiSrv failed to start" | tee /opt/stack/status/contrail/apiSrv.failure
[1] 28773
bash: /opt/stack/status/contrail/apiSrv.pid: No such file or directory
python /usr/local/lib/python2.7/dist-packages/vnc_cfg_api_server/vnc_cfg_api_server.py --conf_file /etc/contrail/api_server.conf --rabbit_password contrail123
04/07/2014 11:32:37 AM [oc-comp2:ApiServer:Config:0]: Failed to import package 
sandesh
ERROR:oc-comp2:ApiServer:Config:0:Failed to import package sandesh
04/07/2014 11:32:37 AM [oc-comp2:ApiServer:Config:0]: Failed to import package 
sandesh
ERROR:oc-comp2:ApiServer:Config:0:Failed to import package sandesh
04/07/2014 11:32:37 AM [oc-comp2:ApiServer:Config:0]: SANDESH: Logging: LEVEL: 
[SYS_INFO] - [SYS_DEBUG]
INFO:oc-comp2:ApiServer:Config:0:SANDESH: Logging: LEVEL: [SYS_INFO] - 
[SYS_DEBUG]
04/07/2014 11:32:37 AM [oc-comp2:ApiServer:Config:0]: SANDESH: Logging: FILE: 
[stdout] - [/var/log/contrail/api.log]
INFO:oc-comp2:ApiServer:Config:0:SANDESH: Logging: FILE: [stdout] - 
[/var/log/contrail/api.log]
ERROR:stevedore.extension:Could not load 'xxx': No option 'admin_token' in 
section: 'KEYSTONE'
ERROR:stevedore.extension:No option 'admin_token' in section: 'KEYSTONE'
Traceback (most recent call last):
  File "/opt/stack/stevedore/stevedore/extension.py", line 162, in _load_plugins
    verify_requirements,
  File "/opt/stack/stevedore/stevedore/extension.py", line 180, in _load_one_plugin
    obj = plugin(*invoke_args, **invoke_kwds)
  File "/usr/local/lib/python2.7/dist-packages/vnc_openstack/__init__.py", line 41, in __init__
    self._admin_token = conf_sections.get('KEYSTONE', 'admin_token')
  File "/usr/lib/python2.7/ConfigParser.py", line 618, in get
    raise NoOptionError(option, section)
NoOptionError: No option 'admin_token' in section: 'KEYSTONE'
Bottle v0.11.6 server starting up (using GeventServer())...
Listening on http://0.0.0.0:8084/
Hit Ctrl-C to quit.

.

Possible solution
-----------------

Adding the below lines to the file /usr/local/lib/python2.7/dist-packages/vnc_openstack/__init__.py
at around line 26 seems to solve the problem:

try:
    self._admin_token = conf_sections.get('KEYSTONE', 'admin_token')
except:
    self._admin_token = None

I am not sure if this is the right solution, but this should be
addressed.
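For illustration, a hedged sketch of the workaround above as a standalone function, with the broad bare "except:" narrowed to ConfigParser's specific errors (shown here with Python 3's configparser; the report targets Python 2's ConfigParser). The function name and its usage are illustrative, not vnc_openstack's API.

```python
# Illustrative only: tolerate a config file whose KEYSTONE section lacks
# an admin_token option, instead of letting NoOptionError propagate.
import configparser

def get_admin_token(conf_sections):
    try:
        return conf_sections.get('KEYSTONE', 'admin_token')
    except (configparser.NoOptionError, configparser.NoSectionError):
        return None  # fall back gracefully when the option is absent

cfg = configparser.ConfigParser()
cfg.add_section('KEYSTONE')      # section exists, option is missing
print(get_admin_token(cfg))      # -> None instead of NoOptionError
cfg.set('KEYSTONE', 'admin_token', 'secret')
print(get_admin_token(cfg))      # -> secret
```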

Thanks

** Affects: opencontrail
 Importance: Undecided
 Status: New

** Project changed: neutron => opencontrail

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1303759

Title:
  neutron net-create is failing and apiSrv is throwing an exception

Status in OpenContrail:
  New

Bug description:
  When I run ./stack.sh with localrc configured to run as a controller node, 
stack.sh fails with the below error
   neutron net-create --tenant-id a5ceeadae4c44781bfee71554f283362 private
  2014-04-07 11:33:25.224 | ++ grep ' id '
  2014-04-07 11:33:25.226 | ++ get_field 2
  2014-04-07 11:33:25.228 | ++ read data
  2014-04-07 11:33:26.119 | Request Failed: internal server error while 
processing your request.
  2014-04-07 11:33:26.136 | + NET_ID=
  2014-04-07 11:33:26.138 | + die_if_not_set 397 NET_ID 'Failure creating 
NET_ID for  a5ceeadae4c44781bfee71554f283362'
  2014-04-07 11:33:26.140 | + local exitcode=0
  2014-04-07 11:33:26.142 | [Call Trace]
  2014-04-07 11:33:26.144 | ./stack.sh:1188:create_neutron_initial_network

  screen-apiSrv.log has the below exception at the beginning

  ubuntu@oc-comp2:~/devstack$ python /usr/local/lib/python2.7/dist-packages/vnc_cfg_api_server/vnc_cfg_api_server.py --conf_file /etc/contrail/api_server.conf --rabbit_password contrail123 && echo $! > /opt/stack/status/contrail/apiSrv.pid; fg || echo "apiSrv failed to start" | tee /opt/stack/status/contrail/apiSrv.failure
  [1] 28773
  bash: /opt/stack/status/contrail/apiSrv.pid: No such file or directory
  python