[Yahoo-eng-team] [Bug 1533108] [NEW] [lbaasv2]The admin_state_up=False of listener didn't take effect if there were two listeners under one loadbalancer

2016-01-12 Thread Zou Keke
Public bug reported:

=========================================
This issue exists on Master.

=========================================
Reproduce steps:
1. Create a loadbalancer LB1
2. Create a listener Listener1 under LB1
3. Create another listener Listener2 under LB1
4. Create pools and members under Listener1 and Listener2
5. Change admin_state_up of Listener2 to False

After these operations, Listener2 can still be accessed.
If I then change admin_state_up of both Listener1 and Listener2 to False, it
works: neither of them can be accessed.
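
A rough sketch of the behaviour I would expect from the haproxy driver
(hypothetical code, not the actual neutron-lbaas implementation): a listener
with admin_state_up=False should simply not be rendered into the haproxy
configuration, independent of the other listeners on the same loadbalancer.

# Hypothetical sketch only -- it illustrates the expected per-listener
# handling, not the actual neutron-lbaas haproxy driver code.
def build_frontend_sections(loadbalancer):
    """Render one haproxy frontend per administratively-up listener."""
    sections = []
    for listener in loadbalancer.listeners:
        if not listener.admin_state_up:
            # A disabled listener should be skipped even when other
            # listeners on the same loadbalancer are still enabled.
            continue
        sections.append(
            'frontend %s\n    bind %s:%s' % (
                listener.id, loadbalancer.vip_address,
                listener.protocol_port))
    return sections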


=========================================
[root@controller ~]# neutron lbaas-loadbalancer-show lb1
+---------------------+------------------------------------------------+
| Field               | Value                                          |
+---------------------+------------------------------------------------+
| admin_state_up      | True                                           |
| description         |                                                |
| id                  | ef1aaf1f-f1a0-4297-b2a2-19d3ade2cf61           |
| listeners           | {"id": "a1e46e6a-37e9-4a60-ab61-75905217948e"} |
|                     | {"id": "da1c0c06-bb78-4ff1-acfa-735d34b77557"} |
| name                | lb1                                            |
| operating_status    | ONLINE                                         |
| provider            | haproxy                                        |
| provisioning_status | ACTIVE                                         |
| tenant_id           | f9969a4c930c410f8c508a54fa67f775               |
| vip_address         | 192.168.1.11                                   |
| vip_port_id         | f820d46b-96ee-431c-85c9-a50cb5af988e           |
| vip_subnet_id       | b4b63f9a-8cab-40b5-bdb7-db125b06be17           |
+---------------------+------------------------------------------------+
[root@controller ~]#

[root@controller ~]# neutron lbaas-listener-show a1e46e6a-37e9-4a60-ab61-75905217948e
+--------------------------+------------------------------------------------+
| Field                    | Value                                          |
+--------------------------+------------------------------------------------+
| admin_state_up           | True                                           |
| connection_limit         | 0                                              |
| default_pool_id          | 34c9e4fd-4ee4-4b1d-b646-a28b2c4c732a           |
| default_tls_container_id |                                                |
| description              |                                                |
| id                       | a1e46e6a-37e9-4a60-ab61-75905217948e           |
| loadbalancers            | {"id": "ef1aaf1f-f1a0-4297-b2a2-19d3ade2cf61"} |
| name                     | listener1                                      |
| protocol                 | TCP                                            |
| protocol_port            | 22                                             |
| sni_container_ids        |                                                |
| tenant_id                | f9969a4c930c410f8c508a54fa67f775               |
+--------------------------+------------------------------------------------+
[root@controller ~]#

[root@controller ~]# neutron lbaas-listener-show da1c0c06-bb78-4ff1-acfa-735d34b77557
+--------------------------+------------------------------------------------+
| Field                    | Value                                          |
+--------------------------+------------------------------------------------+
| admin_state_up           | False                                          |
| connection_limit         | -1                                             |
| default_pool_id          | e5055294-fccd-4eb5-82fa-924906251fd4           |
| default_tls_container_id |                                                |
| description              |                                                |
| id                       | da1c0c06-bb78-4ff1-acfa-735d34b77557           |
| loadbalancers            | {"id": "ef1aaf1f-f1a0-4297-b2a2-19d3ade2cf61"} |
| name                     | listener2                                      |
| protocol                 | HTTP                                           |
| protocol_port            | 80                                             |
| sni_container_ids        |                                                |
| tenant_id                | f9969a4c930c410f8c508a54fa67f775               |
+--------------------------+------------------------------------------------+
[root@controller ~]#
[root@controller ~]# curl http://172.16.102.229:80/index.html
test
[root@controller ~]#

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

** Summary changed:

- [lbaasv2]The admin_state_up=False of listener didn't take effet if there were two listeners under one loadbalancer
+ [lbaasv2]The admin_state_up=False of listener didn't take effect if there were two listeners under one loadbalancer


[Yahoo-eng-team] [Bug 1528850] [NEW] log level of ProcessMonitor should not be ERROR

2015-12-23 Thread Zou Keke
Public bug reported:

https://github.com/openstack/neutron/blob/master/neutron/agent/linux/external_process.py#L230

I suppose the log level should be info or warning.
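
A minimal sketch of the change I have in mind (hypothetical helper name and
message, not the actual code in external_process.py): emit the respawn
message at WARNING rather than ERROR, since ProcessMonitor recovers from the
condition by itself.

import logging

LOG = logging.getLogger(__name__)

def report_dead_process(service, uuid):
    # Currently logged at ERROR; WARNING (or INFO) seems more appropriate
    # because the monitor respawns the dead process on its own.
    LOG.warning("%(service)s for uuid %(uuid)s died and will be respawned",
                {'service': service, 'uuid': uuid})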

** Affects: neutron
 Importance: Undecided
 Assignee: Zou Keke (zoukeke)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Zou Keke (zoukeke)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1528850

Title:
  log level of ProcessMonitor should not be ERROR

Status in neutron:
  New



[Yahoo-eng-team] [Bug 1527145] Re: when a port is updated on one compute node, the ipset on other compute nodes is not refreshed

2015-12-23 Thread Zou Keke
This bug has been fixed on Master.
https://review.openstack.org/#/c/177159/

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527145

Title:
  when a port is updated on one compute node, the ipset on other compute
  nodes is not refreshed

Status in neutron:
  Fix Released

Bug description:
  I found this problem in the Kilo release, but I'm not sure if it still
  exists on the master branch.

  =========================================
  Reproduce steps:
  =========================================
  (Three compute nodes, OVS agent, security group with ipset enabled)
  1. Launch VM1(1.1.1.1) on Compute Node1 with default security group
  2. Launch VM2(1.1.1.2) on Compute Node2 with default security group
  3. Launch VM3(1.1.1.3) on Compute Node3 with default security group
  4. Change VM1's IP address to 1.1.1.10 and add allowed address pair
     1.1.1.10 with port-update

  After these operations, I found that the ipset on Compute Node1 added
  member 1.1.1.10, but the ipsets on Compute Node2 and Compute Node3 did
  not, so pinging VM2 and VM3 from VM1 failed.



[Yahoo-eng-team] [Bug 1527145] [NEW] when a port is updated on one compute node, the ipset on other compute nodes is not refreshed

2015-12-17 Thread Zou Keke
Public bug reported:

I found this problem in the Kilo release, but I'm not sure if it still
exists on the master branch.

=========================================
Reproduce steps:
=========================================
(Three compute nodes, OVS agent, security group with ipset enabled)
1. Launch VM1(1.1.1.1) on Compute Node1 with default security group
2. Launch VM2(1.1.1.2) on Compute Node2 with default security group
3. Launch VM3(1.1.1.3) on Compute Node3 with default security group
4. Change VM1's IP address to 1.1.1.10 and add allowed address pair 1.1.1.10
   with port-update

After these operations, I found that the ipset on Compute Node1 added member
1.1.1.10, but the ipsets on Compute Node2 and Compute Node3 did not, so
pinging VM2 and VM3 from VM1 failed.
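
A sketch of what I expected to happen on the other compute nodes
(hypothetical helper, not the actual fix; "set_members" stands in for the
agent's ipset-manager API): the member update should reach every agent that
hosts ports in the security group and replace the ipset contents there too.

def on_security_group_members_updated(ipset_manager, sg_id, member_ips):
    """What each ovs agent should do when SG membership changes."""
    # After step 4, member_ips includes 1.1.1.10, so remote nodes keep
    # allowing traffic from VM1's new address.
    ipset_manager.set_members('IPv4' + sg_id, 'IPv4', member_ips)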

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527145

Title:
  when a port is updated on one compute node, the ipset on other compute
  nodes is not refreshed

Status in neutron:
  New



[Yahoo-eng-team] [Bug 1523341] [NEW] Unable to add ipv6 static route

2015-12-06 Thread Zou Keke
Public bug reported:

On the Liberty release.

When I add an IPv6 static route via Project->Network->Routers->Static
Route->Add Static Route, Horizon returns "Invalid version for IP
address".

The same IPv6 static route can be added with the neutron CLI.

# neutron -v router-update router0 -- --routes type=dict list=true destination=2002::0/64,nexthop=2001::100

# neutron router-show router0
+-----------------------+----------------------------------------------------------+
| Field                 | Value                                                    |
+-----------------------+----------------------------------------------------------+
| admin_state_up        | True                                                     |
| distributed           | False                                                    |
| external_gateway_info | {"network_id": "63c39233-e44b-400b-b8c3-9a185568eedc",  |
|                       | "enable_snat": true, "external_fixed_ips":              |
|                       | [{"subnet_id": "f8f96fa1-6233-4eae-92f0-fca1848bb275",  |
|                       | "ip_address": "172.16.207.5"}]}                          |
| ha                    | False                                                    |
| id                    | b02c7c45-f807-47d8-8335-fbffb3a2b6b6                     |
| name                  | router0                                                  |
| routes                | {"destination": "2002::0/64", "nexthop": "2001::100"}   |
| status                | ACTIVE                                                   |
| tenant_id             | ace870e6790a4195b1b50fe69adbab69                         |
+-----------------------+----------------------------------------------------------+
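
A minimal illustration of the symptom (hypothetical code, not Horizon's
actual form logic): if the static-route form validates the destination and
next hop as IPv4 only, any IPv6 value is rejected, while the neutron API
used by the CLI above accepts it.

import netaddr

def validate_ipv4_only(value):
    # Raises netaddr.AddrFormatError for IPv6 input such as "2001::100".
    return netaddr.IPAddress(value, version=4)

# validate_ipv4_only("2001::100")  -> rejected, matching the form error
# netaddr.IPAddress("2001::100")   -> accepted once the version is not forced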

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1523341

Title:
  Unable to add ipv6 static route

Status in OpenStack Dashboard (Horizon):
  New


[Yahoo-eng-team] [Bug 1519664] [NEW] lb-pool status remains ACTIVE when admin_state_up is false.

2015-11-24 Thread Zou Keke
Public bug reported:

In the Liberty release, I've set the lb-pool LB1 to admin_state_up=False,
but its status remains ACTIVE.
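
For clarity, a sketch of the status handling I would expect (hypothetical
function and status constant, not the actual LBaaS plugin code): once
admin_state_up is False and the backend has been reconfigured, the pool
should leave the ACTIVE status.

def sync_pool_status(plugin, context, pool):
    """Reflect the admin state in the operational status after an update."""
    if pool['admin_state_up']:
        plugin.update_status(context, pool['id'], 'ACTIVE')
    else:
        # 'INACTIVE' is only illustrative; the point is that the status
        # shown below should not stay ACTIVE for a disabled pool.
        plugin.update_status(context, pool['id'], 'INACTIVE')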

=========================================
[root@localhost ~(keystone_admin)]# 
[root@localhost ~(keystone_admin)]# neutron lb-pool-update 256872f4-7ae8-43eb-9764-0c157e4fc2a8 --admin-state-up=False -v
DEBUG: keystoneclient.session REQ: curl -g -i -X GET 
http://172.16.207.71:5000/v2.0 -H "Accept: application/json" -H "User-Agent: 
python-keystoneclient"
DEBUG: keystoneclient.session RESP: [200] content-length: 339 vary: 
X-Auth-Token connection: keep-alive date: Wed, 25 Nov 2015 07:14:35 GMT 
content-type: application/json x-openstack-request-id: 
req-280d970a-79ab-47a3-9dec-703ba09596ce 
RESP BODY: {"version": {"status": "stable", "updated": "2014-04-17T00:00:00Z", 
"media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"}], "id": "v2.0", "links": 
[{"href": "http://172.16.207.71:5000/v2.0/;, "rel": "self"}, {"href": 
"http://docs.openstack.org/;, "type": "text/html", "rel": "describedby"}]}}

DEBUG: neutronclient.neutron.v2_0.lb.pool.UpdatePool 
run(Namespace(id=u'256872f4-7ae8-43eb-9764-0c157e4fc2a8', 
request_format='json'))
DEBUG: keystoneclient.auth.identity.v2 Making authentication request to 
http://172.16.207.71:5000/v2.0/tokens
DEBUG: keystoneclient.session REQ: curl -g -i -X GET
http://172.16.207.71:9696/v2.0/lb/pools.json?fields=id&id=256872f4-7ae8-43eb-9764-0c157e4fc2a8
 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H
"X-Auth-Token: {SHA1}ac76e59978aff11fc3e265fe0eef096d0411a372"
DEBUG: keystoneclient.session RESP: [200] date: Wed, 25 Nov 2015 07:14:36 GMT 
connection: keep-alive content-type: application/json; charset=UTF-8 
content-length: 59 x-openstack-request-id: 
req-4022c95f-16a2-4d52-9de1-ca1da4db6c7d 
RESP BODY: {"pools": [{"id": "256872f4-7ae8-43eb-9764-0c157e4fc2a8"}]}

DEBUG: keystoneclient.session REQ: curl -g -i -X PUT 
http://172.16.207.71:9696/v2.0/lb/pools/256872f4-7ae8-43eb-9764-0c157e4fc2a8.json
 -H "User-Agent: python-neutronclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}ac76e59978aff11fc3e265fe0eef096d0411a372" -d '{"pool": {"admin_state_up": 
"False"}}'
DEBUG: keystoneclient.session RESP: [200] date: Wed, 25 Nov 2015 07:14:36 GMT 
connection: keep-alive content-type: application/json; charset=UTF-8 
content-length: 428 x-openstack-request-id: 
req-d76074df-41ff-4d96-bd01-3e3d2bc22a8e 
RESP BODY: {"pool": {"status": "PENDING_UPDATE", "lb_method": "ROUND_ROBIN", 
"protocol": "HTTP", "description": "", "health_monitors": [], "members": [], 
"status_description": null, "id": "256872f4-7ae8-43eb-9764-0c157e4fc2a8", 
"vip_id": null, "name": "LB1", "admin_state_up": false, "subnet_id": 
"e5a0cc1d-73a4-4262-9afe-cbed72aa67e9", "tenant_id": 
"ace870e6790a4195b1b50fe69adbab69", "health_monitors_status": [], "provider": 
"haproxy"}}

Updated pool: 256872f4-7ae8-43eb-9764-0c157e4fc2a8
[root@localhost ~(keystone_admin)]# 
[root@localhost ~(keystone_admin)]# neutron lb-pool-list
+--------------------------------------+------+----------+-------------+----------+----------------+--------+
| id                                   | name | provider | lb_method   | protocol | admin_state_up | status |
+--------------------------------------+------+----------+-------------+----------+----------------+--------+
| 256872f4-7ae8-43eb-9764-0c157e4fc2a8 | LB1  | haproxy  | ROUND_ROBIN | HTTP     | False          | ACTIVE |
+--------------------------------------+------+----------+-------------+----------+----------------+--------+
[root@localhost ~(keystone_admin)]# 
[root@localhost ~(keystone_admin)]# neutron lb-pool-show 256872f4-7ae8-43eb-9764-0c157e4fc2a8
+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| admin_state_up         | False                                |
| description            |                                      |
| health_monitors        |                                      |
| health_monitors_status |                                      |
| id                     | 256872f4-7ae8-43eb-9764-0c157e4fc2a8 |
| lb_method              | ROUND_ROBIN                          |
| members                |                                      |
| name                   | LB1                                  |
| protocol               | HTTP                                 |
| provider               | haproxy                              |
| status                 | ACTIVE                               |
| status_description     |                                      |
| subnet_id              | e5a0cc1d-73a4-4262-9afe-cbed72aa67e9 |
| tenant_id              | ace870e6790a4195b1b50fe69adbab69     |
| vip_id                 |                                      |
+------------------------+--------------------------------------+