[Yahoo-eng-team] [Bug 1624648] [NEW] Do not need run _enable_netfilter_for_bridges() for each new device

2016-09-17 Thread yujie
Public bug reported:

When a security group is set for a new device, the function
prepare_port_filter() is called. Every such call runs
self._enable_netfilter_for_bridges(), which does not need to be executed
each time.

def prepare_port_filter(self, port):
    LOG.debug("Preparing device (%s) filter", port['device'])
    self._remove_chains()
    self._set_ports(port)
    self._enable_netfilter_for_bridges()
    # each security group has it own chains
    self._setup_chains()
    return self.iptables.apply()

We could check an _enabled_netfilter_for_bridges flag first and only run
self._enable_netfilter_for_bridges() when it has not been done yet.
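
A minimal sketch of that guard, assuming a boolean
_enabled_netfilter_for_bridges attribute is added to the firewall driver
(the class and names below are illustrative stand-ins, not the actual
neutron code):

class FirewallDriverSketch(object):
    def __init__(self):
        self._enabled_netfilter_for_bridges = False

    def _enable_netfilter_for_bridges(self):
        # The net.bridge.bridge-nf-call-* sysctl values only need to be
        # set once per agent run, so skip the work on later calls.
        if self._enabled_netfilter_for_bridges:
            return
        # ... run the sysctl commands here, as the current method does ...
        self._enabled_netfilter_for_bridges = True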

** Affects: neutron
 Importance: Undecided
 Assignee: yujie (16189455-d)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yujie (16189455-d)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624648

Title:
  Do not need run _enable_netfilter_for_bridges() for each new device

Status in neutron:
  New

Bug description:
  When a security group is set for a new device, the function
  prepare_port_filter() is called. Every such call runs
  self._enable_netfilter_for_bridges(), which does not need to be
  executed each time.

  def prepare_port_filter(self, port):
      LOG.debug("Preparing device (%s) filter", port['device'])
      self._remove_chains()
      self._set_ports(port)
      self._enable_netfilter_for_bridges()
      # each security group has it own chains
      self._setup_chains()
      return self.iptables.apply()

  We could check an _enabled_netfilter_for_bridges flag first and only
  run self._enable_netfilter_for_bridges() when it has not been done yet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624648/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1611237] [NEW] Restart neutron-openvswitch-agent get ERROR "Switch connection timeout"

2016-08-09 Thread yujie
Public bug reported:

Environment: devstack  master, ubuntu 14.04

After ./stack.sh finishes, kill the neutron-openvswitch-agent process
and then start it with:
/usr/bin/python /usr/local/bin/neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini

The log shows:
2016-08-08 11:02:06.346 ERROR ryu.lib.hub [-] hub: uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 54, in _launch
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/ryu/controller/controller.py", line 97, in __call__
    self.ofp_ssl_listen_port)
  File "/usr/local/lib/python2.7/dist-packages/ryu/controller/controller.py", line 120, in server_loop
    datapath_connection_factory)
  File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 117, in __init__
    self.server = eventlet.listen(listen_info)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/convenience.py", line 43, in listen
    sock.bind(addr)
  File "/usr/lib/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 98] Address already in use

and

ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch [-] Switch connection timeout

In Kilo I could start the ovs-agent this way without problems; I am not
sure whether this is still the right way to start the ovs-agent on
master.
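
The traceback shows that the agent's embedded ryu OpenFlow controller
cannot bind its listen socket (errno 98, address already in use), and the
"Switch connection timeout" that follows is consistent with the
controller never listening. A quick, hedged check before restarting (not
part of neutron; 127.0.0.1:6633 is the usual default
of_listen_address/of_listen_port of the native driver, adjust to your
ml2_conf.ini values):

import socket

def port_is_free(host="127.0.0.1", port=6633):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((host, port))   # raises errno 98 if something still holds it
        return True
    except socket.error:
        return False
    finally:
        sock.close()

print(port_is_free())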

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1611237

Title:
  Restart neutron-openvswitch-agent get ERROR "Switch connection
  timeout"

Status in neutron:
  New

Bug description:
  Environment: devstack  master, ubuntu 14.04

  After ./stack.sh finishes, kill the neutron-openvswitch-agent process
  and then start it with:
  /usr/bin/python /usr/local/bin/neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini

  The log shows:
  2016-08-08 11:02:06.346 ERROR ryu.lib.hub [-] hub: uncaught exception:
  Traceback (most recent call last):
    File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 54, in _launch
      return func(*args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/ryu/controller/controller.py", line 97, in __call__
      self.ofp_ssl_listen_port)
    File "/usr/local/lib/python2.7/dist-packages/ryu/controller/controller.py", line 120, in server_loop
      datapath_connection_factory)
    File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 117, in __init__
      self.server = eventlet.listen(listen_info)
    File "/usr/local/lib/python2.7/dist-packages/eventlet/convenience.py", line 43, in listen
      sock.bind(addr)
    File "/usr/lib/python2.7/socket.py", line 224, in meth
      return getattr(self._sock,name)(*args)
  error: [Errno 98] Address already in use

  and

  ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch [-] Switch connection timeout

  In Kilo I could start the ovs-agent this way without problems; I am
  not sure whether this is still the right way to start the ovs-agent on
  master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1611237/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605713] Re: neutron subnet.shared attributes out of sync with parent network and no available CLI to update subnet.shared

2016-08-07 Thread yujie
Using Kilo I could not reproduce this; the subnet's shared attribute in
the database follows the network's exactly.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605713

Title:
  neutron subnet.shared attributes out of sync with parent network and
  no available CLI to update subnet.shared

Status in neutron:
  Invalid

Bug description:
  1. create a network with shared=0.
  2. create a subnet for the network.
  3. update the network to set shared.
  4. examine shared in the neutron subnet DB: it remains 0 while shared in the network table has been updated to 1.

  e.g.

  neutron net-create --provider:physical_network phsynet1 --provider:network_type flat net-1
  neutron subnet-create net-1 192.168.30.0/24
  neutron net-update --shared net-1

  now examine the database directly for the subnet 192.168.30.0, its
  shared attribute remains as 0.

  There is no CLI available to update it.

  versions tested:
  neutron stable/kilo
  neutron client: 2.6.0
  linux distro: Linux 12.04, Linux 14.04

  Expected: one of the following solutions:
  1) remove the subnet.shared attribute and use the parent network's shared attribute.
  2) provide a mechanism to update subnet.shared

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593967] Re: neutron command need filtering

2016-06-19 Thread yujie
** Project changed: neutron => python-neutronclient

** Changed in: python-neutronclient
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593967

Title:
  neutron command need filtering

Status in python-neutronclient:
  New

Bug description:
  When many Neutron ports exist, listing them takes a long time. We need to be able to filter ports by host_id, device_owner, security_groups and so on.
  The nova command can already filter by name, host and more; Neutron lacks this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1593967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593967] [NEW] neutron command need filtering

2016-06-18 Thread yujie
Public bug reported:

When many Neutron ports exist, listing them takes a long time. We need to be able to filter ports by host_id, device_owner, security_groups and so on.
The nova command can already filter by name, host and more; Neutron lacks this.
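
As a hedged illustration of the server-side filtering that is already
possible through the API with python-neutronclient (keyword arguments to
list_ports() are passed as query filters; the credentials and endpoint
below are placeholders):

from neutronclient.v2_0 import client

neutron = client.Client(username="admin", password="secret",
                        tenant_name="admin",
                        auth_url="http://controller:5000/v2.0")

# Only ports whose device_owner matches are returned by the server.
ports = neutron.list_ports(device_owner="compute:nova")["ports"]
for port in ports:
    print("%s %s" % (port["id"], port.get("binding:host_id")))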

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593967

Title:
  neutron command need filtering

Status in neutron:
  New

Bug description:
  When many Neutron ports exist, listing them takes a long time. We need to be able to filter ports by host_id, device_owner, security_groups and so on.
  The nova command can already filter by name, host and more; Neutron lacks this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593967/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1589400] [NEW] In func tunnel_sync flood flows were added many times leading to slow start

2016-06-06 Thread yujie
Public bug reported:

When the ovs-agent starts, the function tunnel_sync() calls
_setup_tunnel_port() when l2_pop is not enabled.
_setup_tunnel_port() is called once for every tunnel, and each call adds
the flood flows to br-tun again, although installing them once is enough.

def tunnel_sync(self):
    LOG.debug("Configuring tunnel endpoints to other OVS agents")

    try:
        for tunnel_type in self.tunnel_types:
            details = self.plugin_rpc.tunnel_sync(self.context,
                                                  self.local_ip,
                                                  tunnel_type,
                                                  self.conf.host)
            if not self.l2_pop:
                tunnels = details['tunnels']
                for tunnel in tunnels:
                    if self.local_ip != tunnel['ip_address']:
                        remote_ip = tunnel['ip_address']
                        tun_name = self.get_tunnel_name(
                            tunnel_type, self.local_ip, remote_ip)
                        if tun_name is None:
                            continue
                        self._setup_tunnel_port(self.tun_br,
                                                tun_name,
                                                tunnel['ip_address'],
                                                tunnel_type)

In _setup_tunnel_port(), this code adds the flood flows:
if ofports and not self.l2_pop:
    # Update flooding flows to include the new tunnel
    for vlan_mapping in list(self.local_vlan_map.values()):
        if vlan_mapping.network_type == tunnel_type:
            br.install_flood_to_tun(vlan_mapping.vlan,
                                    vlan_mapping.segmentation_id,
                                    ofports)
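
A self-contained sketch of the restructuring this suggests, using toy
stand-ins for the agent and bridge objects (the names and signatures here
are illustrative, not the actual neutron patch): create all the tunnel
ports first, then install the flood flows once per tunnel_type.

class TunnelSyncSketch(object):
    def __init__(self, tun_br, local_vlan_map):
        self.tun_br = tun_br                  # stand-in for the br-tun wrapper
        self.local_vlan_map = local_vlan_map  # vlan id -> mapping object

    def sync_tunnels(self, tunnels, tunnel_type, local_ip):
        ofports = []
        for tunnel in tunnels:
            if tunnel['ip_address'] != local_ip:
                # Only create the port here; no flood-flow update per call.
                ofports.append(self.tun_br.add_tunnel_port(
                    tunnel['ip_address'], tunnel_type))
        if ofports:
            # One flood-flow refresh after every ofport is known.
            for vlan_mapping in self.local_vlan_map.values():
                if vlan_mapping.network_type == tunnel_type:
                    self.tun_br.install_flood_to_tun(
                        vlan_mapping.vlan,
                        vlan_mapping.segmentation_id,
                        ofports)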

** Affects: neutron
 Importance: Undecided
 Assignee: yujie (16189455-d)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yujie (16189455-d)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1589400

Title:
  In func tunnel_sync flood flows were added many times leading to slow
  start

Status in neutron:
  New

Bug description:
  When the ovs-agent starts, the function tunnel_sync() calls
  _setup_tunnel_port() when l2_pop is not enabled.
  _setup_tunnel_port() is called once for every tunnel, and each call
  adds the flood flows to br-tun again, although installing them once is
  enough.

  def tunnel_sync(self):
      LOG.debug("Configuring tunnel endpoints to other OVS agents")

      try:
          for tunnel_type in self.tunnel_types:
              details = self.plugin_rpc.tunnel_sync(self.context,
                                                    self.local_ip,
                                                    tunnel_type,
                                                    self.conf.host)
              if not self.l2_pop:
                  tunnels = details['tunnels']
                  for tunnel in tunnels:
                      if self.local_ip != tunnel['ip_address']:
                          remote_ip = tunnel['ip_address']
                          tun_name = self.get_tunnel_name(
                              tunnel_type, self.local_ip, remote_ip)
                          if tun_name is None:
                              continue
                          self._setup_tunnel_port(self.tun_br,
                                                  tun_name,
                                                  tunnel['ip_address'],
                                                  tunnel_type)

  In _setup_tunnel_port(), this code adds the flood flows:
  if ofports and not self.l2_pop:
      # Update flooding flows to include the new tunnel
      for vlan_mapping in list(self.local_vlan_map.values()):
          if vlan_mapping.network_type == tunnel_type:
              br.install_flood_to_tun(vlan_mapping.vlan,
                                      vlan_mapping.segmentation_id,
                                      ofports)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1589400/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1578132] [NEW] allowed-address-pairs only update ipset on one compute node

2016-05-04 Thread yujie
Public bug reported:

1. Two VMs run on the same network but on different compute nodes.
 vm1 (100.100.100.3) on CN1
 vm2 (100.100.100.4) on CN2
2. Both VMs are bound to security group sg1, which has two rules:
 a) egress: all protocols, 0.0.0.0/0
 b) ingress: all protocols, remote sg: sg1
3. vm1 and vm2 can ping each other successfully, as expected.
4. Update the port belonging to vm1 with:
 neutron port-update 4d436802-fa9f-4552-97ee-7626f691b8ca --allowed-address-pairs type=dict list=true ip_address=100.100.100.10
5. Change the IP of vm1 to 100.100.100.10. Now vm2 can ping vm1 successfully, but vm1 cannot ping vm2.

Then check the ipset on CN1:   ipset list
Name: NETIPv4f766bf09-a5fa-4901-9
Type: hash:net
Revision: 3
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 16880
References: 1
Members:
100.100.100.3
100.100.100.10
100.100.100.4

Check ipset on CN2: ipset list
Name: NETIPv4f766bf09-a5fa-4901-9
Type: hash:net
Revision: 3
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 16848
References: 1
Members:
100.100.100.4
100.100.100.3

If the IP (100.100.100.10) is added to ipset NETIPv4f766bf09-a5fa-4901-9
on CN2, vm1 can ping vm2 successfully.

I use the Kilo release; I am not sure whether master has this problem.
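
A small, hedged helper (not part of neutron) for comparing the members of
the same ipset on two compute nodes while debugging this kind of
divergence; it only parses "ipset list <name>" output collected from each
node, however you gather it:

def ipset_members(ipset_list_output):
    lines = ipset_list_output.splitlines()
    start = lines.index('Members:') + 1
    return set(line.strip() for line in lines[start:] if line.strip())

# members_cn1 = ipset_members(open('cn1.txt').read())
# members_cn2 = ipset_members(open('cn2.txt').read())
# print(members_cn1 - members_cn2)   # e.g. the missing 100.100.100.10 on CN2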

** Affects: neutron
 Importance: Undecided
 Assignee: yujie (16189455-d)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yujie (16189455-d)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1578132

Title:
  allowed-address-pairs only update ipset on one compute node

Status in neutron:
  New

Bug description:
  1. Two VMs run on the same network but on different compute nodes.
   vm1 (100.100.100.3) on CN1
   vm2 (100.100.100.4) on CN2
  2. Both VMs are bound to security group sg1, which has two rules:
   a) egress: all protocols, 0.0.0.0/0
   b) ingress: all protocols, remote sg: sg1
  3. vm1 and vm2 can ping each other successfully, as expected.
  4. Update the port belonging to vm1 with:
   neutron port-update 4d436802-fa9f-4552-97ee-7626f691b8ca --allowed-address-pairs type=dict list=true ip_address=100.100.100.10
  5. Change the IP of vm1 to 100.100.100.10. Now vm2 can ping vm1 successfully, but vm1 cannot ping vm2.

  Then check the ipset on CN1:   ipset list
  Name: NETIPv4f766bf09-a5fa-4901-9
  Type: hash:net
  Revision: 3
  Header: family inet hashsize 1024 maxelem 65536
  Size in memory: 16880
  References: 1
  Members:
  100.100.100.3
  100.100.100.10
  100.100.100.4

  Check ipset on CN2: ipset list
  Name: NETIPv4f766bf09-a5fa-4901-9
  Type: hash:net
  Revision: 3
  Header: family inet hashsize 1024 maxelem 65536
  Size in memory: 16848
  References: 1
  Members:
  100.100.100.4
  100.100.100.3

  If the IP (100.100.100.10) is added to ipset NETIPv4f766bf09-a5fa-4901-9
  on CN2, vm1 can ping vm2 successfully.

  I use the Kilo release; I am not sure whether master has this problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1578132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570171] [NEW] ip_conntrack only delete one direction entry

2016-04-13 Thread yujie
Public bug reported:

The test used neutron master.
With devstack I created one network and two VMs on it; vm1's fixed IP is 10.0.0.3 and vm2's fixed IP is 10.0.0.4.
Both VMs are bound to sg1:
   rule1: ingress, any protocol, any remote IP prefix
   rule2: egress, any protocol, any remote IP prefix

1. vm1 pings vm2 and vm2 pings vm1; the conntrack entries are:
$ sudo conntrack -L -p icmp
icmp 1 29 src=10.0.0.3 dst=10.0.0.4 type=8 code=0 id=21761 src=10.0.0.4 dst=10.0.0.3 type=0 code=0 id=21761 mark=0 zone=4 use=1
icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=4 use=1
icmp 1 29 src=10.0.0.3 dst=10.0.0.4 type=8 code=0 id=21761 src=10.0.0.4 dst=10.0.0.3 type=0 code=0 id=21761 mark=0 zone=3 use=1
icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=3 use=1
conntrack v1.4.1 (conntrack-tools): 4 flow entries have been shown.

2. Unbind sg1 from vm2; the conntrack entries become:
$ sudo conntrack -L -p icmp
icmp 1 29 src=10.0.0.3 dst=10.0.0.4 type=8 code=0 id=21761 src=10.0.0.4 dst=10.0.0.3 type=0 code=0 id=21761 mark=0 zone=4 use=1
icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=4 use=1
icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=3 use=1
conntrack v1.4.1 (conntrack-tools): 3 flow entries have been shown.

Now vm1 cannot reach vm2, which is correct; but vm2 can still ping vm1
successfully. The entry "icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8
code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0
zone=3 use=1" was not deleted as expected.
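
A hedged sketch of the cleanup the report argues for (illustrative, not
the neutron fix): delete the conversation keyed on the remote address as
source as well as the one keyed on it as destination, using the
conntrack(8) CLI. It assumes your conntrack build supports the -w/--zone
option, which the zone values in the output above suggest.

import subprocess

def delete_icmp_conntrack(local_ip, remote_ip, zone):
    for src, dst in ((remote_ip, local_ip), (local_ip, remote_ip)):
        cmd = ['conntrack', '-D', '-p', 'icmp',
               '-s', src, '-d', dst, '-w', str(zone)]
        # conntrack exits non-zero when nothing matches; ignore that case.
        subprocess.call(cmd)

# The leftover zone-3 entry from step 2:
# delete_icmp_conntrack('10.0.0.3', '10.0.0.4', 3)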

** Affects: neutron
 Importance: Undecided
 Assignee: yujie (16189455-d)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yujie (16189455-d)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1570171

Title:
  ip_conntrack only delete one direction entry

Status in neutron:
  New

Bug description:
  The test used neutron master.
  With devstack I created one network and two VMs on it; vm1's fixed IP is 10.0.0.3 and vm2's fixed IP is 10.0.0.4.
  Both VMs are bound to sg1:
     rule1: ingress, any protocol, any remote IP prefix
     rule2: egress, any protocol, any remote IP prefix

  1. vm1 pings vm2 and vm2 pings vm1; the conntrack entries are:
  $ sudo conntrack -L -p icmp
  icmp 1 29 src=10.0.0.3 dst=10.0.0.4 type=8 code=0 id=21761 src=10.0.0.4 dst=10.0.0.3 type=0 code=0 id=21761 mark=0 zone=4 use=1
  icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=4 use=1
  icmp 1 29 src=10.0.0.3 dst=10.0.0.4 type=8 code=0 id=21761 src=10.0.0.4 dst=10.0.0.3 type=0 code=0 id=21761 mark=0 zone=3 use=1
  icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=3 use=1
  conntrack v1.4.1 (conntrack-tools): 4 flow entries have been shown.

  2. Unbind sg1 from vm2; the conntrack entries become:
  $ sudo conntrack -L -p icmp
  icmp 1 29 src=10.0.0.3 dst=10.0.0.4 type=8 code=0 id=21761 src=10.0.0.4 dst=10.0.0.3 type=0 code=0 id=21761 mark=0 zone=4 use=1
  icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=4 use=1
  icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=3 use=1
  conntrack v1.4.1 (conntrack-tools): 3 flow entries have been shown.

  Now vm1 cannot reach vm2, which is correct; but vm2 can still ping
  vm1 successfully. The entry "icmp 1 29 src=10.0.0.4 dst=10.0.0.3
  type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0
  id=22017 mark=0 zone=3 use=1" was not deleted as expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1570171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537962] [NEW] Add time_stamp parameter to router

2016-01-25 Thread yujie
Public bug reported:

Sometimes, when we want to analyze and debug problems from the error
log, we need to know when a router was created, when its gateway was set,
and when a subnet was added to it.
But currently there is no such information. Could we add this parameter?
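
A hedged sketch of what the requested attribute could look like as a
model mixin (illustrative only, not an agreed neutron design):
created_at/updated_at columns that the router model, and any other
resource, could inherit.

import sqlalchemy as sa
from oslo_utils import timeutils

class HasTimestamps(object):
    created_at = sa.Column(sa.DateTime, default=timeutils.utcnow,
                           nullable=False)
    updated_at = sa.Column(sa.DateTime, default=timeutils.utcnow,
                           onupdate=timeutils.utcnow, nullable=False)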

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1537962

Title:
  Add time_stamp parameter to router

Status in neutron:
  New

Bug description:
  Sometimes, when we want to analyze and debug problems from the error
  log, we need to know when a router was created, when its gateway was
  set, and when a subnet was added to it.
  But currently there is no such information. Could we add this
  parameter?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1537962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536088] [NEW] give net a parameter about launched_at time

2016-01-20 Thread yujie
Public bug reported:

When we create an instance, its launched_at time is recorded, but no
such information is provided for networks and subnets.
For a network operator this time is important for locating network
problems, especially when checking log files.
Could we add this information for networks and subnets?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1536088

Title:
  give net a parameter about launched_at time

Status in neutron:
  New

Bug description:
  When we create an instance, its launched_at time is recorded, but no
  such information is provided for networks and subnets.
  For a network operator this time is important for locating network
  problems, especially when checking log files.
  Could we add this information for networks and subnets?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1536088/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534113] [NEW] default sg could add same rule as original egress ipv4 rule

2016-01-14 Thread yujie
Public bug reported:

In the default security group, we can add a rule identical to the
original egress IPv4 rule.

Reproduction step:
# neutron security-group-rule-create --direction egress --remote-ip-prefix 0.0.0.0/0 default

It returns:
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| direction         | egress                               |
| ethertype         | IPv4                                 |
| id                | d8f968e2-270b-4d6e-a2d0-a408726b7edc |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          |                                      |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 9a2c0d86-4a36-46d4-a4da-43a239003eef |
| tenant_id         | 52953da91c0e47528d5317867391aaec     |
+-------------------+--------------------------------------+

Actually we expect the response "Security group rule already exists.
Rule id is x".

** Affects: neutron
 Importance: Undecided
 Assignee: yujie (16189455-d)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yujie (16189455-d)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534113

Title:
  default sg could add same rule as original egress ipv4 rule

Status in neutron:
  New

Bug description:
  In the default security group, we can add a rule identical to the
  original egress IPv4 rule.

  Reproduction step:
  # neutron security-group-rule-create --direction egress --remote-ip-prefix 0.0.0.0/0 default

  It returns:
  Created a new security_group_rule:
  +-------------------+--------------------------------------+
  | Field             | Value                                |
  +-------------------+--------------------------------------+
  | direction         | egress                               |
  | ethertype         | IPv4                                 |
  | id                | d8f968e2-270b-4d6e-a2d0-a408726b7edc |
  | port_range_max    |                                      |
  | port_range_min    |                                      |
  | protocol          |                                      |
  | remote_group_id   |                                      |
  | remote_ip_prefix  | 0.0.0.0/0                            |
  | security_group_id | 9a2c0d86-4a36-46d4-a4da-43a239003eef |
  | tenant_id         | 52953da91c0e47528d5317867391aaec     |
  +-------------------+--------------------------------------+

  Actually we expect the response "Security group rule already exists.
  Rule id is x".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1534113/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1519509] Re: Creating instances fails after enabling port_security extension on existing deployment

2015-11-24 Thread yujie
This issue is the same as
https://bugs.launchpad.net/neutron/+bug/1461519, which was fixed in
Liberty.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1519509

Title:
  Creating instances fails after enabling port_security extension on
  existing deployment

Status in neutron:
  Invalid

Bug description:
  Creating instances fails after enabling the port_security extension on
  an existing deployment.

  1) Create necessary network components to launch an instance.
  2) Launch an instance.
  3) Enable the port_security extension and restart the neutron-server service.
  4) Launch another instance which fails with the following errors in the 
nova-compute service log: 

  http://paste.openstack.org/show/479924/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1519509/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1514318] Re: running vm can not change its gateway and ip as the dhcp server send

2015-11-09 Thread yujie
** Description changed:

  In kilo + dvr environment, a running vm could not get the ip address sent by 
dhcp server.
  1. Create a net with one subnet, setting no enable dhcp.
  2. Create a vm in the subnet above, after it created successfully, it has no 
ip.
  3. Setting the subnet enable dhcp, let the cirros vm created in step 2 exec 
"sudo udhcpc".
- Capturing packets in network node, the dhcp packet looks fine. But vm has 
no ip yet.
+ Capturing packets in network node, the dhcp packet looks fine. But vm has 
no ip yet.
  4. Reboot vm, it will get ip address.
  
  The gateway setting on subnet has the same problem.

** Project changed: neutron => cirros

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1514318

Title:
  running vm can not change its gateway and ip as the dhcp server send

Status in CirrOS:
  New

Bug description:
  In a Kilo + DVR environment, a running VM cannot pick up the IP address sent by the DHCP server.
  1. Create a network with one subnet, with DHCP disabled.
  2. Create a VM on that subnet; after it is created successfully it has no IP.
  3. Enable DHCP on the subnet and run "sudo udhcpc" in the CirrOS VM created in step 2.
  Capturing packets on the network node shows the DHCP exchange looks fine, but the VM still has no IP.
  4. Reboot the VM; it then gets an IP address.

  Setting the gateway on the subnet has the same problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cirros/+bug/1514318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1514318] [NEW] running vm can not change its gateway and ip as the dhcp server send

2015-11-08 Thread yujie
Public bug reported:

In a Kilo + DVR environment, a running VM cannot pick up the IP address sent by the DHCP server.
1. Create a network with one subnet, with DHCP disabled.
2. Create a VM on that subnet; after it is created successfully it has no IP.
3. Enable DHCP on the subnet and run "sudo udhcpc" in the CirrOS VM created in step 2.
Capturing packets on the network node shows the DHCP exchange looks fine, but the VM still has no IP.
4. Reboot the VM; it then gets an IP address.

Setting the gateway on the subnet has the same problem.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1514318

Title:
  running vm can not change its gateway and ip as the dhcp server send

Status in neutron:
  New

Bug description:
  In a Kilo + DVR environment, a running VM cannot pick up the IP address sent by the DHCP server.
  1. Create a network with one subnet, with DHCP disabled.
  2. Create a VM on that subnet; after it is created successfully it has no IP.
  3. Enable DHCP on the subnet and run "sudo udhcpc" in the CirrOS VM created in step 2.
  Capturing packets on the network node shows the DHCP exchange looks fine, but the VM still has no IP.
  4. Reboot the VM; it then gets an IP address.

  Setting the gateway on the subnet has the same problem.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1514318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512199] [NEW] change vm fixed ips will cause unable to communicate to vm in other network

2015-11-01 Thread yujie
Public bug reported:

I use DVR + Kilo, with VXLAN. The environment is like:

compute1 (hosts vm2-2 and vm3-1)  ---vxlan---  compute2 (hosts vm2-1)

net2 (vm2-1, vm2-2)  --- router1 ---  net3 (vm3-1)

vm2-1 (192.168.2.3) and vm2-2 (192.168.2.4) are in the same network (net2, 192.168.2.0/24) but are not assigned to the same compute node. vm3-1 is in net3 (192.168.3.0/24). net2 and net3 are connected by router1. The three VMs are in the default security group. No firewall is used.

1. Change the IPs of vm2-1 using the command below:
neutron port-update portID --fixed-ip subnet_id=subnetID,ip_address=192.168.2.10 --fixed-ip subnet_id=subnetID,ip_address=192.168.2.20
In vm2-1, "sudo udhcpc" (CirrOS) shows a correct DHCP exchange, but the IP does not change.
Then reboot vm2-1; the IP of vm2-1 becomes 192.168.2.20.

2. vm2-2 can ping 192.168.2.20 successfully, but vm3-1 cannot ping 192.168.2.20.

By capturing packets and looking at related information, the reasons seem to be:
1. The new IP (192.168.2.20) and the MAC of vm2-1 were not written to the ARP cache in the namespace of router1 on the compute1 node.
2. In DVR mode, the ARP request from the gateway port (192.168.2.1) on compute1 to vm2-1 is dropped by the flow table on compute2, so the ARP request (192.168.2.1 -> 192.168.2.20) cannot reach vm2-1.
3. For vm2-2, the ARP request (192.168.2.4 -> 192.168.2.20) is not dropped, so it can reach vm2-1.

In my opinion, if both new fixed IPs of vm2-1 (192.168.2.10 and 192.168.2.20) and the MAC were written to the ARP cache in the namespace of router1 on the compute1 node, the problem would be resolved. But only one IP (192.168.2.10) and the MAC are written.

BTW, if only one fixed IP is set for vm2-1 it works fine, but when two fixed IPs are set for vm2-1 the problem above most probably happens.
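
A hedged sketch of the behaviour argued for above (illustrative, not the
actual neutron DVR code): when a port's fixed IPs change, write an ARP
entry into the router namespace for every fixed IP of the port, not just
one of them. The namespace and device names in the usage line are
examples.

import subprocess

def update_arp_entries(router_ns, device, port_mac, fixed_ips):
    for ip in fixed_ips:                      # e.g. 192.168.2.10 and .20
        subprocess.check_call([
            'ip', 'netns', 'exec', router_ns,
            'ip', 'neigh', 'replace', ip,
            'lladdr', port_mac, 'dev', device, 'nud', 'permanent'])

# update_arp_entries('qrouter-<router-uuid>', 'qr-xxxxxxxx-xx',
#                    'fa:16:3e:aa:bb:cc', ['192.168.2.10', '192.168.2.20'])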

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed: the environment diagram in the description was
reformatted (whitespace-only changes); the wording of the description is
otherwise unchanged from the report above.
[Yahoo-eng-team] [Bug 1483091] Re: Same name SecurityGroup could not work

2015-08-11 Thread yujie
** Changed in: openstack-manuals
   Status: Invalid => New

** Project changed: openstack-manuals => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483091

Title:
  Same name SecurityGroup could not work

Status in neutron:
  New

Bug description:
  In Icehouse, if two tenants each create a security group with the same
  name, they cannot create a VM in the dashboard using that security
  group; the error says "Multiple security_group matches found for name
  'test', use an ID to be more specific. (HTTP 409) (Request-ID: req-
  ece4dd00-d1a0-4c38-9587-394fa29610da)".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1264829] [NEW] NameError: name 'port' is not defined

2013-12-29 Thread Du Yujie
Public bug reported:

Create Instance:

if api.neutron.is_port_profiles_supported():
    net_id = context['network_id'][0]
    LOG.debug("Horizon-Create Port with %(netid)s %(profile_id)s",
              {'netid': net_id, 'profile_id': context['profile_id']})
    try:
        port = api.neutron.port_create(request, net_id,
                                       policy_profile_id=
                                       context['profile_id'])
    except Exception:
        msg = (_('Port not created for profile-id (%s).') %
               context['profile_id'])
        exceptions.handle(request, msg)
    if port and port.id:
        nics = [{"port-id": port.id}]

If an error is raised during the api.neutron.port_create() call, a
NameError is hit at "if port and port.id" because port was never
assigned.
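
A minimal, self-contained illustration of the failure and the usual fix
(a sketch, not necessarily the patch horizon merged): give port a default
before the try block, so the name exists even when port_create() raises
and the exception handler swallows the error.

def buggy(create_port):
    try:
        port = create_port()
    except Exception:
        pass              # error "handled", but port was never assigned
    return port           # UnboundLocalError (a NameError) when create_port() raised

def fixed(create_port):
    port = None           # default keeps the later check safe
    try:
        port = create_port()
    except Exception:
        pass
    return port           # None instead of an exception

def boom():
    raise RuntimeError("port_create failed")

print(fixed(boom))        # prints None; buggy(boom) would raise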

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1264829

Title:
  NameError: name 'port' is not defined

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Create Instance:

  if api.neutron.is_port_profiles_supported():
      net_id = context['network_id'][0]
      LOG.debug("Horizon-Create Port with %(netid)s %(profile_id)s",
                {'netid': net_id, 'profile_id': context['profile_id']})
      try:
          port = api.neutron.port_create(request, net_id,
                                         policy_profile_id=
                                         context['profile_id'])
      except Exception:
          msg = (_('Port not created for profile-id (%s).') %
                 context['profile_id'])
          exceptions.handle(request, msg)
      if port and port.id:
          nics = [{"port-id": port.id}]

  If an error is raised during the api.neutron.port_create() call, a
  NameError is hit at "if port and port.id" because port was never
  assigned.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1264829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp