[Yahoo-eng-team] [Bug 1456512] [NEW] vpn and l3 agent have a conflict in icehouse.

2015-05-19 Thread yangzhenyu
Public bug reported:

The test steps:
1. Create subnets named A and B.
2. Create routers named A and B.
3. Add subnet A to router A and set a gateway for router A, then do the same with B.
4. Create vpn A: the vpn subnet uses subnet A, the peer gateway is router B's gateway, and the peer subnet is subnet B.
5. Create vpn B: the vpn subnet uses subnet B, the peer gateway is router A's gateway, and the peer subnet is subnet A.

Then test the vpn: subnets A and B can communicate.

But after I restart the l3 agent or create a firewall (not a rule problem) in
the tenant, subnets A and B can no longer communicate.

I found an issue in qrouter A's and B's iptables nat tables:

The vpn uses a chain to exempt its traffic from SNAT, but after I restart the
l3 agent or create a firewall, the chain order is changed, like this:

Chain POSTROUTING (policy ACCEPT 19 packets, 1447 bytes)
pkts bytes target prot opt in out source destination
22 1699 neutron-l3-agent-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0
28 2167 neutron-postrouting-bottom all -- * * 0.0.0.0/0 0.0.0.0/0
26 1999 neutron-vpn-agen-POSTROUTING all -- * * 0.0.0.0/0 0.0.0.0/0

Chain neutron-l3-agent-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- !qg-bd458156-6e !qg-bd458156-6e 0.0.0.0/0 0.0.0.0/0 ! ctstate DNAT

Chain neutron-postrouting-bottom (1 references)
pkts bytes target prot opt in out source destination
22 1699 neutron-l3-agent-snat all -- * * 0.0.0.0/0 0.0.0.0/0
25 1915 neutron-vpn-agen-snat all -- * * 0.0.0.0/0 0.0.0.0/0

Chain neutron-l3-agent-snat (1 references)
pkts bytes target prot opt in out source destination
22 1699 neutron-l3-agent-float-snat all -- * * 0.0.0.0/0 0.0.0.0/0
2 168 SNAT all -- * * 111.111.111.0/24 0.0.0.0/0 to:12.12.12.54

Chain neutron-vpn-agen-POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
1 84 ACCEPT all -- * * 111.111.111.0/24 123.123.123.0/24 policy match dir out pol ipsec
0 0 ACCEPT all -- !qg-bd458156-6e !qg-bd458156-6e 0.0.0.0/0 0.0.0.0/0 ! ctstate DNAT

So the packet is SNATed before it ever reaches the vpn chain's IPsec policy rule, and the vpn fails.
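
The ordering problem can be illustrated with a toy first-match model of the POSTROUTING traversal shown in the dump (a sketch, not Neutron code; chain names follow the dump, predicates are simplified):

```python
# Toy first-match model of iptables chain traversal. Rules are
# (predicate, verdict) pairs; a verdict naming another chain is a jump,
# anything else (ACCEPT, SNAT) is terminal and decides the packet's fate.

def traverse(chains, chain, pkt):
    """Return the first terminal verdict reached for pkt, or None."""
    for match, verdict in chains[chain]:
        if not match(pkt):
            continue
        if verdict in chains:                 # jump into a sub-chain
            result = traverse(chains, verdict, pkt)
            if result is not None:
                return result
        else:
            return verdict                    # terminal verdict
    return None

vpn_pkt = {"src": "111.111.111.0/24", "dst": "123.123.123.0/24"}
always = lambda pkt: True
ipsec_out = lambda pkt: pkt["dst"] == "123.123.123.0/24"   # "policy match"

# Broken order (as in the dump): neutron-postrouting-bottom, which reaches
# the SNAT rule, is traversed before neutron-vpn-agen-POSTROUTING's ACCEPT.
broken = {
    "POSTROUTING": [
        (always, "neutron-l3-agent-POSTROUTING"),
        (always, "neutron-postrouting-bottom"),
        (always, "neutron-vpn-agen-POSTROUTING"),
    ],
    "neutron-l3-agent-POSTROUTING": [],
    "neutron-postrouting-bottom": [(always, "neutron-l3-agent-snat")],
    "neutron-l3-agent-snat": [(always, "SNAT")],
    "neutron-vpn-agen-POSTROUTING": [(ipsec_out, "ACCEPT")],
}

# Intended order: the vpn chain's ACCEPT is consulted first.
working = dict(broken)
working["POSTROUTING"] = [
    (always, "neutron-vpn-agen-POSTROUTING"),
    (always, "neutron-l3-agent-POSTROUTING"),
    (always, "neutron-postrouting-bottom"),
]

print(traverse(broken, "POSTROUTING", vpn_pkt))    # SNAT   -> tunnel breaks
print(traverse(working, "POSTROUTING", vpn_pkt))   # ACCEPT -> tunnel works
```

Because iptables is first-match, moving the vpn chain after the bottom chain is enough to SNAT the tunnel traffic and break the connection.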

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: vpnaas


[Yahoo-eng-team] [Bug 1399114] [NEW] when deleting the lb vip, the tap device is not deleted

2014-12-04 Thread yangzhenyu
Public bug reported:

Hi all,
  When I delete an lb vip that is in ERROR status, the lbaas namespace tap device is not deleted. So when I add a new vip using the same IP address, it cannot be reached, because of the IP conflict.

   My neutron version is icehouse.
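
A cleanup would need to delete the leftover tap device inside the lbaas namespace. A minimal sketch of building that command (the qlbaas-<pool_id> namespace and tap<port_id[:11]> device naming are Neutron conventions; the ids below are made up, and the command is only constructed here, running it needs root):

```python
# Build the "ip netns exec ... ip link delete ..." command that would remove
# a leftover VIP tap device from the LBaaS namespace. This only constructs
# the argument list; pass it to subprocess under root to actually clean up.

def lbaas_namespace(pool_id):
    # Namespace the LBaaS agent creates for a pool.
    return 'qlbaas-%s' % pool_id

def vip_device_name(port_id):
    # Linux device names are limited to 15 chars: 'tap' + 11 chars of the id.
    return 'tap%s' % port_id[:11]

def build_delete_cmd(pool_id, port_id):
    return ['ip', 'netns', 'exec', lbaas_namespace(pool_id),
            'ip', 'link', 'delete', vip_device_name(port_id)]

# Illustrative (made-up) pool and port ids:
cmd = build_delete_cmd('8cbe34a2-d7d0-44b2-a148-adb7d23bb3fb',
                       'a1b2c3d4-e5f6-7890-abcd-ef0123456789')
print(' '.join(cmd))
```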

** Affects: neutron
 Importance: Undecided
 Assignee: yangzhenyu (cdyangzhenyu)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yangzhenyu (cdyangzhenyu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1399114

Title:
  when deleting the lb vip, the tap device is not deleted

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hi all,
    When I delete an lb vip that is in ERROR status, the lbaas namespace tap device is not deleted. So when I add a new vip using the same IP address, it cannot be reached, because of the IP conflict.

   My neutron version is icehouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1399114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398267] [NEW] when restarting the vpn and l3 agent, the firewall rules apply to all tenants' routers.

2014-12-01 Thread yangzhenyu
Public bug reported:

Hi all:
   When restarting the vpn and l3 agent, the firewall rules are applied to all tenants' routers.
   Steps:
   1. Create a network and a router in tenants A and B.
   2. Create a firewall in tenant A.
   3. Restart the vpn and l3 agent services.
   4. ip netns exec qrouter-B_router_uuid iptables -L -t filter -vn

Then I find the firewall rules in the chains neutron-l3-agent-FORWARD and
neutron-vpn-agen-FORWARD.

So I debugged the code and added a check in
neutron/services/firewall/agents/l3reference/firewall_l3_agent.py:

    def _process_router_add(self, ri):
        """On router add, get fw with rules from plugin and update driver."""
        LOG.debug(_("Process router add, router_id: '%s'"), ri.router['id'])
        routers = []
        routers.append(ri.router)
        router_info_list = self._get_router_info_list_for_tenant(
            routers,
            ri.router['tenant_id'])
        if router_info_list:
            # Get the firewall with rules
            # for the tenant the router is on.
            ctx = context.Context('', ri.router['tenant_id'])
            fw_list = self.fwplugin_rpc.get_firewalls_for_tenant(ctx)
            LOG.debug(_("Process router add, fw_list: '%s'"),
                      [fw['id'] for fw in fw_list])
            for fw in fw_list:
                # added: only sync firewalls owned by the router's tenant
                if fw['tenant_id'] == ri.router['tenant_id']:
                    self._invoke_driver_for_sync_from_plugin(
                        ctx,
                        router_info_list,
                        fw)

My neutron version is icehouse.
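
The effect of the added tenant check can be demonstrated with plain data (a sketch; the dicts stand in for the firewall records returned by get_firewalls_for_tenant):

```python
# With the tenant guard, only firewalls owned by the router's tenant are
# passed on to _invoke_driver_for_sync_from_plugin().

def firewalls_to_apply(fw_list, router_tenant_id):
    return [fw for fw in fw_list if fw['tenant_id'] == router_tenant_id]

fw_list = [
    {'id': 'fw-a', 'tenant_id': 'tenant-A'},
    {'id': 'fw-b', 'tenant_id': 'tenant-B'},
]

# A router in tenant B: without the check, fw-a would wrongly be applied too.
print([fw['id'] for fw in firewalls_to_apply(fw_list, 'tenant-B')])  # ['fw-b']
```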

** Affects: neutron
 Importance: Undecided
 Status: New


[Yahoo-eng-team] [Bug 1397231] [NEW] Can't create the second vpn-site-conn

2014-11-28 Thread yangzhenyu
Public bug reported:

Hi all,
   I can't create a second vpn-site-connection, and restarting the vpnaas service also produces this error:

==

2014-11-28 01:29:09.791 6215 ERROR neutron.services.vpn.device_drivers.ipsec [-] Failed to enable vpn process on router e78e9837-4458-48d7-9ab5-e4acdf1789ce
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec Traceback (most recent call last):
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.6/site-packages/neutron/services/vpn/device_drivers/ipsec.py", line 245, in enable
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec     self.restart()
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.6/site-packages/neutron/services/vpn/device_drivers/ipsec.py", line 345, in restart
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec     self.start()
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.6/site-packages/neutron/services/vpn/device_drivers/ipsec.py", line 390, in start
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec     '--virtual_private', virtual_private
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.6/site-packages/neutron/services/vpn/device_drivers/ipsec.py", line 317, in _execute
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec     check_exit_code=check_exit_code)
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.6/site-packages/neutron/agent/linux/ip_lib.py", line 466, in execute
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec     check_exit_code=check_exit_code)
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec   File "/usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py", line 76, in execute
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec     raise RuntimeError(m)
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec RuntimeError:
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-e78e9837-4458-48d7-9ab5-e4acdf1789ce', 'ipsec', 'pluto', '--ctlbase', '/var/lib/neutron/ipsec/e78e9837-4458-48d7-9ab5-e4acdf1789ce/var/run/pluto', '--ipsecdir', '/var/lib/neutron/ipsec/e78e9837-4458-48d7-9ab5-e4acdf1789ce/etc', '--use-netkey', '--uniqueids', '--nat_traversal', '--secretsfile', '/var/lib/neutron/ipsec/e78e9837-4458-48d7-9ab5-e4acdf1789ce/etc/ipsec.secrets', '--virtual_private', '%v4:22.22.22.0/24,%v4:11.11.11.0/24']
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec Exit code: 10
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec Stdout: ''
2014-11-28 01:29:09.791 6215 TRACE neutron.services.vpn.device_drivers.ipsec Stderr: 'adjusting ipsec.d to /var/lib/neutron/ipsec/e78e9837-4458-48d7-9ab5-e4acdf1789ce/etc\npluto: lock file "/var/lib/neutron/ipsec/e78e9837-4458-48d7-9ab5-e4acdf1789ce/var/run/pluto.pid" already exists\n'

==

My environment is OpenStack Icehouse and the system is CentOS 6.5.

With the following code added, it works:

    def stop(self):
        # Stop process using whack
        # Note this will also stop pluto
        self.disconnect()
        self._execute([self.binary,
                       'whack',
                       '--ctlbase', self.pid_path,
                       '--shutdown',
                       ])
        # added: delete the stale pid file so pluto can be started again
        pid_file = self.pid_path + '.pid'
        if os.path.exists(pid_file):
            os.remove(pid_file)
        # clean connection_status info
        self.connection_status = {}
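
The pid-file cleanup above can be exercised on its own (a Python 3 sketch using a temporary directory; pid_path mirrors the driver attribute that points at .../var/run/pluto):

```python
import os
import tempfile

def remove_stale_pid(pid_path):
    """Delete '<pid_path>.pid' if present, so pluto can be started again."""
    pid_file = pid_path + '.pid'
    if os.path.exists(pid_file):
        os.remove(pid_file)

with tempfile.TemporaryDirectory() as d:
    pid_path = os.path.join(d, 'var', 'run', 'pluto')
    os.makedirs(os.path.dirname(pid_path))
    open(pid_path + '.pid', 'w').close()   # simulate the stale lock file
    remove_stale_pid(pid_path)
    assert not os.path.exists(pid_path + '.pid')
    remove_stale_pid(pid_path)             # idempotent: no error if absent
```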

** Affects: neutron
 Importance: Undecided
 Status: New


[Yahoo-eng-team] [Bug 1319661] [NEW] baremetal: "tftp prefix" can cause "could not find kernel image"

2014-05-15 Thread yangzhenyu
Public bug reported:

hi,
   I booted a baremetal instance, but it fails with the error "could not find a kernel image" in PXE. I checked the file "/tftpboot/UUID/config":
label deploy
kernel /tftpboot/8cbe34a2-d7d0-44b2-a148-adb7d23bb3fb/deploy_kernel

   So I modified it:
 label deploy
 kernel 8cbe34a2-d7d0-44b2-a148-adb7d23bb3fb/deploy_kernel

   and then PXE works correctly.

My version is Havana.
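
The manual edit can be automated by stripping the tftp root from kernel paths in the config (a hedged sketch; the /tftpboot root and the config layout are assumptions based on the snippet above):

```python
# Strip the tftp root prefix from 'kernel' lines so the paths are relative
# to the tftp server's root, matching the manual fix described above.

TFTP_ROOT = '/tftpboot/'

def strip_tftp_prefix(config_text, tftp_root=TFTP_ROOT):
    fixed = []
    for line in config_text.splitlines():
        if line.lstrip().startswith('kernel ') and tftp_root in line:
            line = line.replace(tftp_root, '', 1)
        fixed.append(line)
    return '\n'.join(fixed)

cfg = ("label deploy\n"
       "kernel /tftpboot/8cbe34a2-d7d0-44b2-a148-adb7d23bb3fb/deploy_kernel")
print(strip_tftp_prefix(cfg))
```

This mirrors the fix: since the tftp daemon already serves files relative to its root, an absolute /tftpboot/... path in the PXE config cannot be resolved.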

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1319661

Title:
  baremetal: "tftp prefix" can cause "could not find kernel image"

Status in OpenStack Compute (Nova):
  New

Bug description:
  hi,
     I booted a baremetal instance, but it fails with the error "could not find a kernel image" in PXE. I checked the file "/tftpboot/UUID/config":
  label deploy
  kernel /tftpboot/8cbe34a2-d7d0-44b2-a148-adb7d23bb3fb/deploy_kernel

     So I modified it:
   label deploy
   kernel 8cbe34a2-d7d0-44b2-a148-adb7d23bb3fb/deploy_kernel

     and then PXE works correctly.

  My version is Havana.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1319661/+subscriptions
