[Yahoo-eng-team] [Bug 1591857] [NEW] 'segments' set to service_plugins should be 'segment'

2016-06-12 Thread Itsuro Oda
Public bug reported:

This is about the segment extension.

The alias of the segment extension is 'segment', while service_plugins in
neutron.conf must be set to 'segments'. This mismatch causes confusion (at
least for me). I think the two names should be the same, i.e. the
service_plugins entry point should be defined as 'segment' (a fix in setup.cfg).
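
For illustration, the mismatch looks roughly like this (a sketch from memory;
the exact file contents may differ):
---
# neutron.conf: the plugin must be enabled under the entry-point name
service_plugins = segments

# but the API extension advertises its alias as 'segment', e.g. it appears
# as "alias": "segment" in the GET /v2.0/extensions response
---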

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591857

Title:
  'segments' set to service_plugins should be 'segment'

Status in neutron:
  New

Bug description:
  This is about the segment extension.

  The alias of the segment extension is 'segment', while service_plugins in
  neutron.conf must be set to 'segments'. This mismatch causes confusion (at
  least for me). I think the two names should be the same, i.e. the
  service_plugins entry point should be defined as 'segment' (a fix in
  setup.cfg).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580377] [NEW] conntrack entry is not deleted when security_groups_member_updated

2016-05-10 Thread Itsuro Oda
Public bug reported:

When a remote group member changes, the conntrack entry that should be
deleted is not deleted.

How to reproduce:
* create a VM(VM1) on host-1 (net-a, default security-group) (ex. 10.0.0.3)
* create a VM(VM2) on host-2 (net-a, default security-group) (ex. 10.0.0.4)
* ssh from VM1(10.0.0.3) to VM2(10.0.0.4)
---
host-2:$ sudo conntrack -L |grep 10.0
tcp  6 431986 ESTABLISHED src=10.0.0.3 dst=10.0.0.4 sport=45074 dport=22 
src=10.0.0.4 dst=10.0.0.3 sport=22 dport=45074 [ASSURED] mark=0 zone=1 use=1

host-2:$ sudo ipset list
Name: NIPv492469920-ef76-44af-98c7-
Type: hash:net
Revision: 4
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 16824
References: 1
Members:
10.0.0.4
10.0.0.3
---

* terminate VM1  (nova delete VM1) 
expected: the conntrack entry shown above is deleted.
actual: not deleted
---
host-2:$ sudo conntrack -L |grep 10.0
tcp  6 431986 ESTABLISHED src=10.0.0.3 dst=10.0.0.4 sport=45074 dport=22 
src=10.0.0.4 dst=10.0.0.3 sport=22 dport=45074 [ASSURED] mark=0 zone=1 use=1

host-2:$ sudo ipset list
Name: NIPv492469920-ef76-44af-98c7-
Type: hash:net
Revision: 4
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 16824
References: 1
Members:
10.0.0.4
---

Applies to:
liberty, mitaka, master

Investigation:
summary - devices_with_updated_sg_members is consumed by remove_devices_filter
unintentionally.
* when the ovs-agent receives the security_groups_member_updated RPC call,
  the sg_ids and affected devices are registered in
  self.firewall.devices_with_updated_sg_members.
  (the original intention is that they are handled when refresh_firewall is
  called, but...)
* in the main loop of the ovs-agent, process_deleted_ports is executed before
  process_network_ports.
  process_deleted_ports calls self.sg_agent.remove_devices_filter.
  if there is any deleted port,
  remove_devices_filter
  -> defer_apply_off
  -> _remove_conntrack_entries_from_sg_update
  -> _clean_deleted_remote_sg_members_conntrack_entries
  is called.
  at this point pre_sg_members and sg_members are the same, since no port info
  has been updated in remove_devices_filter, so no conntrack entry is removed.
  but nonetheless devices_with_updated_sg_members is cleared!
* afterwards
  process_network_ports
  -> setup_port_filters -> refresh_firewall -> defer_apply_off
  ... -> _clean_deleted_remote_sg_members_conntrack_entries
  is called.
  at this point pre_sg_members and sg_members differ, since the port info has
  been updated, but no conntrack entry is removed because
  devices_with_updated_sg_members was already cleared.

Note:
Deleting conntrack entries was introduced by
https://bugs.launchpad.net/neutron/+bug/1335375.
Note that the conntrack zone was per network at that time (it is per port now).
The fix for 1335375 is complicated and I suspect it is incomplete (e.g. no
handling of egress & remote-group rules).
How about simplifying it to just record the affected ports and run
"conntrack -D -w <zone>" for the affected ports?
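
A minimal sketch of that simplification (illustrative only, not neutron code;
it assumes the caller has collected the conntrack zone of each affected port
when the security_groups_member_updated notification arrived):
---
import subprocess

def flush_conntrack_for_ports(affected_ports):
    """Delete all conntrack entries in the zones of the affected ports.

    affected_ports: iterable of (device_id, conntrack_zone) tuples recorded
    when security_groups_member_updated was received.
    """
    for _device, zone in affected_ports:
        # 'conntrack -D -w <zone>' deletes every entry in that zone, so any
        # stale flow for the removed remote-group member disappears with it.
        try:
            subprocess.check_call(['conntrack', '-D', '-w', str(zone)])
        except subprocess.CalledProcessError:
            # conntrack may exit non-zero when nothing matched; tolerate it.
            pass
---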

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: sg-fw

** Tags added: sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580377

Title:
  conntrack entry is not deleted when security_groups_member_updated

Status in neutron:
  New

Bug description:
  When a remote group member changes, the conntrack entry that should be
  deleted is not deleted.

  How to reproduce:
  * create a VM(VM1) on host-1 (net-a, default security-group) (ex. 10.0.0.3)
  * create a VM(VM2) on host-2 (net-a, default security-group) (ex. 10.0.0.4)
  * ssh from VM1(10.0.0.3) to VM2(10.0.0.4)
  ---
  host-2:$ sudo conntrack -L |grep 10.0
  tcp  6 431986 ESTABLISHED src=10.0.0.3 dst=10.0.0.4 sport=45074 dport=22 
src=10.0.0.4 dst=10.0.0.3 sport=22 dport=45074 [ASSURED] mark=0 zone=1 use=1

  host-2:$ sudo ipset list
  Name: NIPv492469920-ef76-44af-98c7-
  Type: hash:net
  Revision: 4
  Header: family inet hashsize 1024 maxelem 65536
  Size in memory: 16824
  References: 1
  Members:
  10.0.0.4
  10.0.0.3
  ---

  * terminate VM1  (nova delete VM1) 
  expected: the conntrack entry shown above is deleted.
  actual: not deleted
  ---
  host-2:$ sudo conntrack -L |grep 10.0
  tcp  6 431986 ESTABLISHED src=10.0.0.3 dst=10.0.0.4 sport=45074 dport=22 
src=10.0.0.4 dst=10.0.0.3 sport=22 dport=45074 [ASSURED] mark=0 zone=1 use=1

  host-2:$ sudo ipset list
  Name: NIPv492469920-ef76-44af-98c7-
  Type: hash:net
  Revision: 4
  Header: family inet hashsize 1024 maxelem 65536
  Size in memory: 16824
  References: 1
  Members:
  10.0.0.4
  ---

  Applies to:
  liberty, mitaka, master

  Investigation:
  summary - devices_with_updated_sg_members is consumed by 
remove_devices_filter unintentionally.
  * when ovs-agent receives security_groups_member_updated RPC call, 
sg_ids and affected devices are registered to 
self.firewall.devices_with_updated_sg_members.
(original intention is that it is handled when 

[Yahoo-eng-team] [Bug 1515879] [NEW] port can't be created/updated with different tenant's security-group

2015-11-12 Thread Itsuro Oda
Public bug reported:

The following was possible in icehouse:

0. assume an admin user executes the commands
1. $ neutron security-group-create --tenant-id tenant1 sec1
2. $ neutron port-create --tenant-id tenant2 --security-group <sec1-id> net1
   success

But on current systems (juno and later):
port-create fails with "Security group <sec1-id> does not exist".

This was reported by a customer of mine who currently uses icehouse and plans
to upgrade to a recent release.
It is a real use case, though the example above is greatly simplified.

This is caused by the following fix:
https://review.openstack.org/#/c/123187/
I think the incompatibility was introduced unintentionally by that fix.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: sg-fw

** Tags added: sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515879

Title:
  port can't be created/updated with different tenant's security-group

Status in neutron:
  New

Bug description:
  The following was possible in icehouse:

  0. assume an admin user executes the commands
  1. $ neutron security-group-create --tenant-id tenant1 sec1
  2. $ neutron port-create --tenant-id tenant2 --security-group <sec1-id> net1
     success

  But on current systems (juno and later):
  port-create fails with "Security group <sec1-id> does not exist".

  This was reported by a customer of mine who currently uses icehouse and plans
  to upgrade to a recent release.
  It is a real use case, though the example above is greatly simplified.

  This is caused by the following fix:
  https://review.openstack.org/#/c/123187/
  I think the incompatibility was introduced unintentionally by that fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490767] [NEW] DB migration of geneve type driver should be 'expand'

2015-08-31 Thread Itsuro Oda
Public bug reported:

https://review.openstack.org/#/c/187945/

It only adds a table, so it definitely should be in the 'expand'
directory.

It has already been merged; I am not sure how to fix it, though.
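
For reference, an additive migration like this one is purely of the 'expand'
kind, roughly of the shape below (a generic sketch, not the actual geneve
migration; the revision ids, table name and columns are made up):
---
"""Example of an additive (expand-style) alembic migration."""

from alembic import op
import sqlalchemy as sa

# revision identifiers (placeholders)
revision = 'abcdef123456'
down_revision = '654321fedcba'


def upgrade():
    # Only creates a new table; nothing is dropped or altered, which is
    # exactly what the 'expand' branch is for.
    op.create_table(
        'example_geneve_allocations',
        sa.Column('geneve_vni', sa.Integer(), nullable=False),
        sa.Column('allocated', sa.Boolean(), nullable=False),
        sa.PrimaryKeyConstraint('geneve_vni'),
    )
---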

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490767

Title:
  DB migration of geneve type driver should be 'expand'

Status in neutron:
  New

Bug description:
  https://review.openstack.org/#/c/187945/

  It only adds a table, so it definitely should be in the 'expand'
  directory.

  It has already been merged; I am not sure how to fix it, though.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1428887] [NEW] Unable to communicate to floatingip on a same network

2015-03-05 Thread Itsuro Oda
Public bug reported:

If one tries to communicate from a tenant network to a floating IP attached to
a port on the same network, the communication fails.


for example, unable to communicate from 10.0.0.3 to 100.0.0.5

  ---+---  external 100.0.0.0/24
     |
  +--+-----+
  | router |
  +--+-----+
     |
  ---+----+--------+---  internal 10.0.0.0/24
          |        |
      10.0.0.3  10.0.0.4
                (floating ip 100.0.0.5)

Note that ping is not adequate to check connectivity:
an ICMP reply is returned, so ping succeeds, but the source address is different.
---
10.0.0.3 host: $ ping 100.0.0.5
PING 100.0.0.5 (100.0.0.5) 56(84) bytes of data.
64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=3.45 ms   (should have come
from 100.0.0.5)
---
(This is because the destination address (100.0.0.5) is DNATed to the fixed IP
(10.0.0.4) on the router, but the reply does not go back through the router.)

Use TCP (e.g. ssh) to check connectivity.

This problem is a regression caused by https://review.openstack.org/#/c/131905/
(my fault).
This may not be a common use case, but it should be fixed since it worked
before that patch.

** Affects: neutron
 Importance: Undecided
 Assignee: Itsuro Oda (oda-g)
 Status: New


** Tags: l3-ipam-dhcp

** Tags added: l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) = Itsuro Oda (oda-g)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1428887

Title:
  Unable to communicate to floatingip on a same network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If one tries to communicate from a tenant network to a floating IP attached
  to a port on the same network, the communication fails.

  
  for example, unable to communicate from 10.0.0.3 to 100.0.0.5

+-   exeternal
|   100.0.0.0/24
   +++
   | router  |
   +++
| 10.0.0.0/24
--+-++   internal
  |  |
10.0.0.3  10.0.0.4 
 (100.0.0.5)
  -

  Note that ping is not adequate to check connectivity:
  an ICMP reply is returned, so ping succeeds, but the source address is
  different.
  ---
  10.0.0.3 host: $ ping 100.0.0.5
  PING 100.0.0.5 (100.0.0.5) 56(84) bytes of data.
  64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=3.45 ms   (should have come
  from 100.0.0.5)
  ---
  (This is because the destination address (100.0.0.5) is DNATed to the fixed
  IP (10.0.0.4) on the router, but the reply does not go back through the
  router.)

  Use TCP (e.g. ssh) to check connectivity.

  This problem is a regression caused by
  https://review.openstack.org/#/c/131905/ (my fault).
  This may not be a common use case, but it should be fixed since it worked
  before that patch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1428887/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415304] [NEW] Metaplugin decomposition

2015-01-27 Thread Itsuro Oda
Public bug reported:

This is for Metaplugin decomposition from Neutron core according to
http://specs.openstack.org/openstack/neutron-specs/specs/kilo/core-
vendor-decomposition.html .

** Affects: neutron
 Importance: Undecided
 Assignee: Itsuro Oda (oda-g)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Itsuro Oda (oda-g)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1415304

Title:
  Metaplugin decomposition

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This is for Metaplugin decomposition from Neutron core according to
  http://specs.openstack.org/openstack/neutron-specs/specs/kilo/core-
  vendor-decomposition.html .

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1415304/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408488] [NEW] Agent terminates services when turning admin_state_up False

2015-01-07 Thread Itsuro Oda
Public bug reported:

Currently, turning admin_state_up of a dhcp/l3 agent to False stops all
services hosted on it.
admin_state_up should affect only scheduling and should not terminate
existing services.

** Affects: neutron
 Importance: Undecided
 Assignee: Itsuro Oda (oda-g)
 Status: New


** Tags: l3-ipam-dhcp

** Tags added: l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) = Itsuro Oda (oda-g)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1408488

Title:
  Agent terminates services when turning admin_state_up False

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, turning admin_state_up of a dhcp/l3 agent to False stops all
  services hosted on it.
  admin_state_up should affect only scheduling and should not terminate
  existing services.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1408488/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386041] [NEW] can't connect external network using default snat from tenant network

2014-10-26 Thread Itsuro Oda
Public bug reported:

See the following example:
---
 --+--- external network 192.168.10.0/24
   |
   | 192.168.10.10
+--+----+
|  r1   |  routes [{nexthop: 10.0.0.2, destination: 20.0.0.0/24}]
+--+----+
   | 10.0.0.1
   |
---+-------+--- tenant network1 10.0.0.0/24  (gw: 10.0.0.1)
           |
           | 10.0.0.2
       +---+---+
       |  r2   |  routes [{nexthop: 10.0.0.1, destination: 0.0.0.0/0}]
       +---+---+
           | 20.0.0.1
           |
        ---+--- tenant network2 20.0.0.0/24 (gw: 20.0.0.1)
---

Users want to reach the external network from tenant network2 via default SNAT,
but cannot.
(tenant network2 is connected to r1 indirectly, and its routes are set
properly.)
Currently, users can reach the external network only from tenant network1,
which is directly connected to r1.

I think this is a bug, since the restriction is unnecessary.

It is easy to fix. How about this?
---
diff --git a/neutron/agent/l3_agent.py b/neutron/agent/l3_agent.py
index ff8ad47..097fa36 100644
--- a/neutron/agent/l3_agent.py
+++ b/neutron/agent/l3_agent.py
@@ -1445,9 +1445,8 @@ class L3NATAgent(firewall_l3_agent.FWaaSL3AgentRpcCallback,
         rules = [('POSTROUTING', '! -i %(interface_name)s '
                   '! -o %(interface_name)s -m conntrack ! '
                   '--ctstate DNAT -j ACCEPT' %
-                  {'interface_name': interface_name})]
-        for cidr in internal_cidrs:
-            rules.extend(self.internal_network_nat_rules(ex_gw_ip, cidr))
+                  {'interface_name': interface_name}),
+                 ('snat', '-j SNAT --to-source %s' % ex_gw_ip)]
         return rules
 
     def _snat_redirect_add(self, ri, gateway, sn_port, sn_int):
@@ -1560,11 +1559,6 @@ class L3NATAgent(firewall_l3_agent.FWaaSL3AgentRpcCallback,
             self.driver.unplug(interface_name, namespace=ri.ns_name,
                                prefix=INTERNAL_DEV_PREFIX)
 
-    def internal_network_nat_rules(self, ex_gw_ip, internal_cidr):
-        rules = [('snat', '-s %s -j SNAT --to-source %s' %
-                  (internal_cidr, ex_gw_ip))]
-        return rules
-
     def _create_agent_gateway_port(self, ri, network_id):
         """Create Floating IP gateway port."""
---

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ipam-dhcp

** Tags added: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1386041

Title:
  can't connect external network using default snat from tenant network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  See the following example:
  ---
 --+- external network 192.168.10.0/24
   |
   | 192.168.10.10
   +---+---+
   |  r1   |  routes [{nexthop: 10.0.0.2, destination: 20.0.0.0/24}]
   +---+---+ 
   | 10.0.0.1
   |
  -+-+--- tenant network1 10.0.0.0/24  (gw: 10.0.0.1)
 |
 | 10.0.0.2
 +---+---+
 |  r2   |  routes [{nexthop: 10.0.0.1, destination: 0.0.0.0/0}]
 +---+---+
 | 20.0.0.1
 |
  ---+ tenant network2 20.0.0.0/24 (gw: 20.0.0.1)
  ---

  Users want to reach the external network from tenant network2 via default
  SNAT, but cannot.
  (tenant network2 is connected to r1 indirectly, and its routes are set
  properly.)
  Currently, users can reach the external network only from tenant network1,
  which is directly connected to r1.

  I think this is a bug, since the restriction is unnecessary.

  It is easy to fix. How about this?
  ---
  diff --git a/neutron/agent/l3_agent.py b/neutron/agent/l3_agent.py
  index ff8ad47..097fa36 100644
  --- a/neutron/agent/l3_agent.py
  +++ b/neutron/agent/l3_agent.py
  @@ -1445,9 +1445,8 @@ class 
L3NATAgent(firewall_l3_agent.FWaaSL3AgentRpcCallback,
   rules = [('POSTROUTING', '! -i %(interface_name)s '
 '! -o %(interface_name)s -m conntrack ! '
 '--ctstate DNAT -j ACCEPT' %
  -  {'interface_name': interface_name})]
  -for cidr in internal_cidrs:
  -rules.extend(self.internal_network_nat_rules(ex_gw_ip, cidr))
  +  {'interface_name': interface_name}),
  + ('snat', '-j SNAT --to-source %s' % ex_gw_ip)]
   return rules
   
   def _snat_redirect_add(self, ri, gateway, sn_port, sn_int):
  @@ -1560,11 +1559,6 @@ class 
L3NATAgent(firewall_l3_agent.FWaaSL3AgentRpcCallback,
   self.driver.unplug(interface_name, namespace=ri.ns_name,
  prefix=INTERNAL_DEV_PREFIX)
   
  -def internal_network_nat_rules(self, ex_gw_ip, internal_cidr):
  -rules = [('snat', '-s %s -j SNAT --to-source %s' %
  -  

[Yahoo-eng-team] [Bug 1354285] [NEW] L3-agent using MetaInterfaceDriver failed

2014-08-08 Thread Itsuro Oda
Public bug reported:

MetaInterfaceDriver communicates with neutron-server using the REST API.
If a user intends to use the internalurl for the neutron-server endpoint,
MetaInterfaceDriver fails.
This is because MetaInterfaceDriver does not specify endpoint_type, so it
assumes the publicurl should be used.
---
class MetaInterfaceDriver(LinuxInterfaceDriver):
    def __init__(self, conf):
        super(MetaInterfaceDriver, self).__init__(conf)
        from neutronclient.v2_0 import client
        self.neutron = client.Client(
            username=self.conf.admin_user,
            password=self.conf.admin_password,
            tenant_name=self.conf.admin_tenant_name,
            auth_url=self.conf.auth_url,
            auth_strategy=self.conf.auth_strategy,
            region_name=self.conf.auth_region
        )
---

Note that MetaInterfaceDriver is used with Metaplugin only.
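
A possible direction for a fix, sketched below (the endpoint_type config option
used here is an assumption on my part; neutronclient's Client does accept an
endpoint_type argument):
---
class MetaInterfaceDriver(LinuxInterfaceDriver):
    def __init__(self, conf):
        super(MetaInterfaceDriver, self).__init__(conf)
        from neutronclient.v2_0 import client
        self.neutron = client.Client(
            username=self.conf.admin_user,
            password=self.conf.admin_password,
            tenant_name=self.conf.admin_tenant_name,
            auth_url=self.conf.auth_url,
            auth_strategy=self.conf.auth_strategy,
            region_name=self.conf.auth_region,
            # 'endpoint_type' is a hypothetical new config option here; it
            # would let deployers select internalURL instead of the default
            # publicURL
            endpoint_type=self.conf.endpoint_type
        )
---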

** Affects: neutron
 Importance: Undecided
 Assignee: Itsuro Oda (oda-g)
 Status: New


** Tags: metaplugin

** Tags added: metaplugin

** Changed in: neutron
 Assignee: (unassigned) = Itsuro Oda (oda-g)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1354285

Title:
  L3-agent using MetaInterfaceDriver failed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  MetaInterfaceDriver communicates with neutron-server using the REST API.
  If a user intends to use the internalurl for the neutron-server endpoint,
  MetaInterfaceDriver fails.
  This is because MetaInterfaceDriver does not specify endpoint_type, so it
  assumes the publicurl should be used.
  ---
  class MetaInterfaceDriver(LinuxInterfaceDriver):
  def __init__(self, conf):
  super(MetaInterfaceDriver, self).__init__(conf)
  from neutronclient.v2_0 import client
  self.neutron = client.Client(
  username=self.conf.admin_user,
  password=self.conf.admin_password,
  tenant_name=self.conf.admin_tenant_name,
  auth_url=self.conf.auth_url,
  auth_strategy=self.conf.auth_strategy,
  region_name=self.conf.auth_region
  )
  ---

  Note that MetaInterfaceDriver is used with Metaplugin only.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1354285/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350172] [NEW] Building server failed in VMware Mine Sweeper

2014-07-29 Thread Itsuro Oda
Public bug reported:

VMware Mine Sweeper often failed on the patch
https://review.openstack.org/98278/ .

See logs:
http://208.91.1.172/logs/neutron/98278/11/414421/
http://208.91.1.172/logs/neutron/98278/13/414451/
etc.

The failures may differ slightly between PS11 and PS13, but in both cases
building a server fails.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1350172

Title:
  Building server failed in VMware Mine Sweeper

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  VMware Mine Sweeper often failed on the patch
  https://review.openstack.org/98278/ .

  See logs:
  http://208.91.1.172/logs/neutron/98278/11/414421/
  http://208.91.1.172/logs/neutron/98278/13/414451/
  etc.

  The failures may differ slightly between PS11 and PS13, but in both cases
  building a server fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1350172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327000] [NEW] Should not schedule dhcp-agent when dhcp port creation

2014-06-05 Thread Itsuro Oda
Public bug reported:

Intended operation:
---
1. create network and subnet 
  neutron net-create net
  neutron subnet-create --name sub net 20.0.0.0/24
2. create a port for the dhcp server, intending to assign a specific ip address
 neutron port-create --name dhcp --device-id reserved_dhcp_port --fixed-ip 
ip_address=20.0.0.10,subnet_id=sub net -- --device_owner network:dhcp
3. then schedule dhcp-agent manually 
 neutron dhcp-agent-network-add 275f7a3f-0251-485c-aea1-9913e173dd1e net
---

But currently, scheduling is forced at port creation (step 2), so it is no
longer possible to schedule the agent manually (step 3).

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1327000

Title:
  Should not schedule dhcp-agent when dhcp port creation

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Intended operation:
  ---
  1. create network and subnet 
neutron net-create net
neutron subnet-create --name sub net 20.0.0.0/24
  2. create port for dhcp server. It is intended to assign specific ip address.
   neutron port-create --name dhcp --device-id reserved_dhcp_port --fixed-ip 
ip_address=20.0.0.10,subnet_id=sub net -- --device_owner network:dhcp
  3. then schedule dhcp-agent manually 
   neutron dhcp-agent-network-add 275f7a3f-0251-485c-aea1-9913e173dd1e net
  ---

  But currently, scheduling is forced at port creation (step 2), so it is no
  longer possible to schedule the agent manually (step 3).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1327000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1312482] [NEW] scalability problem of router routes update

2014-04-24 Thread Itsuro Oda
Public bug reported:

Updating router routes takes a long time as the number of routes increases.
The critical problem is that it is a CPU-bound task in neutron-server, and
neutron-server cannot reply to other requests while it runs.

Below is a measurement example of neutron-server's CPU usage.

Setting routes on a router (0 to N):
100 routes: 1 sec
1000 routes: 5 sec
10000 routes: 51 sec

I found that the validation check of the routes parameter is inefficient. The
following example supports this too.

No change, just specifying the same routes again (N to N, DB is not changed):
100 routes: 1 sec
1000 routes: 4 sec
10000 routes: 52 sec

Removing routes from a router (N to 0):
100 routes: 1 sec
1000 routes: 8 sec
10000 routes: 750 sec

I found that the handling of record deletion is bad: it is O(N**2).
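
To illustrate the deletion issue, a toy sketch (not the actual neutron code):
if each removed route is looked up by scanning the full route list, deletion is
O(N**2); computing the difference with a set of keys makes it roughly linear:
---
def routes_to_delete_quadratic(old_routes, new_routes):
    # O(N**2): for every old route, scan the whole new list.
    return [r for r in old_routes if r not in new_routes]


def routes_to_delete_linear(old_routes, new_routes):
    # O(N) on average: build a set of (destination, nexthop) keys once.
    new_keys = {(r['destination'], r['nexthop']) for r in new_routes}
    return [r for r in old_routes
            if (r['destination'], r['nexthop']) not in new_keys]
---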

** Affects: neutron
 Importance: Undecided
 Assignee: Itsuro Oda (oda-g)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Itsuro Oda (oda-g)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1312482

Title:
  scalability problem of router routes update

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Updating router routes takes a long time as the number of routes increases.
  The critical problem is that it is a CPU-bound task in neutron-server, and
  neutron-server cannot reply to other requests while it runs.

  Below is a measurement example of neutron-server's CPU usage.

  Setting routes on a router (0 to N):
  100 routes: 1 sec
  1000 routes: 5 sec
  10000 routes: 51 sec

  I found that the validation check of the routes parameter is inefficient.
  The following example supports this too.

  No change, just specifying the same routes again (N to N, DB is not changed):
  100 routes: 1 sec
  1000 routes: 4 sec
  10000 routes: 52 sec

  Removing routes from a router (N to 0):
  100 routes: 1 sec
  1000 routes: 8 sec
  10000 routes: 750 sec

  I found that the handling of record deletion is bad: it is O(N**2).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1312482/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300570] [NEW] dhcp_agent fails in RPC communication with neutron-server under Metaplugin

2014-03-31 Thread Itsuro Oda
Public bug reported:

This problem occurs when the ml2 plugin runs under Metaplugin.

error log of dhcp_agent is as follows:
---
2014-03-28 18:57:17.062 ERROR neutron.agent.dhcp_agent 
[req-9c53d7a6-d850-42de-896f-184827b33bfd None None] Failed reporting state!
2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent Traceback (most recent 
call last):
2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/dhcp_agent.py, line 564, in _report_state
2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent 
self.state_rpc.report_state(ctx, self.agent_state, self.use_call)
2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/rpc.py, line 72, in report_state
2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent return 
self.call(context, msg, topic=self.topic)
2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/openstack/common/rpc/proxy.py, line 129, in call
2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent exc.info, 
real_topic, msg.get('method'))
2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent Timeout: Timeout while 
waiting on RPC response - topic: q-plugin, RPC method: report_state info: 
unknown 
2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent 
---

This problem was introduced by the patch:
 https://review.openstack.org/#/c/72565/
because with that change the ml2 plugin no longer opens its RPC connection at
plugin initialization.

** Affects: neutron
 Importance: Undecided
 Assignee: Itsuro Oda (oda-g)
 Status: New


** Tags: icehouse-rc-potential

** Changed in: neutron
 Assignee: (unassigned) = Itsuro Oda (oda-g)

** Tags added: icehouse-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1300570

Title:
  dhcp_agent fails in RPC communication with neutron-server under
  Metaplugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This problem occurs when ml2 plugin runs under Metaplugin.

  error log of dhcp_agent is as follows:
  ---
  2014-03-28 18:57:17.062 ERROR neutron.agent.dhcp_agent 
[req-9c53d7a6-d850-42de-896f-184827b33bfd None None] Failed reporting state!
  2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent Traceback (most recent 
call last):
  2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/dhcp_agent.py, line 564, in _report_state
  2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent 
self.state_rpc.report_state(ctx, self.agent_state, self.use_call)
  2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/agent/rpc.py, line 72, in report_state
  2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent return 
self.call(context, msg, topic=self.topic)
  2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent   File 
/opt/stack/neutron/neutron/openstack/common/rpc/proxy.py, line 129, in call
  2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent exc.info, 
real_topic, msg.get('method'))
  2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent Timeout: Timeout while 
waiting on RPC response - topic: q-plugin, RPC method: report_state info: 
unknown 
  2014-03-28 18:57:17.062 TRACE neutron.agent.dhcp_agent 
  ---

  This problem was introduced by the patch:
   https://review.openstack.org/#/c/72565/
  because with that change the ml2 plugin no longer opens its RPC connection
  at plugin initialization.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1300570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300002] [NEW] neutron-db-manage does not work properly when using Metaplugin

2014-03-30 Thread Itsuro Oda
Public bug reported:

neutron-db-manage does not create Neutron DB nor upgrade Neutron DB
properly when using Metaplugin.

The first cause of this problem is that the 'active_plugins' parameter
passed to the migration scripts includes only metaplugin (i.e. it does not
include the target plugins running under metaplugin).

There are further problems even if the first cause is fixed.
For example, there are multiple migration scripts that handle the same table
(of course the target plugin for each script is different, but they may be
used at the same time under metaplugin).

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/132

Title:
  neutron-db-manage does not work properly when using Metaplugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  neutron-db-manage does not create Neutron DB nor upgrade Neutron DB
  properly when using Metaplugin.

  The first cause of this problem is that the 'active_plugins' parameter
  passed to the migration scripts includes only metaplugin (i.e. it does not
  include the target plugins running under metaplugin).

  There are further problems even if the first cause is fixed.
  For example, there are multiple migration scripts that handle the same table
  (of course the target plugin for each script is different, but they may be
  used at the same time under metaplugin).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1285993] [NEW] neutron-server crashes when running on an empty database

2014-02-27 Thread Itsuro Oda
Public bug reported:

operation:
$ mysql
 create database neutron_ml2 character set utf8;
 exit
$ neutron-server --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini

error:
2014-02-28 15:02:46.550 TRACE neutron ProgrammingError: (ProgrammingError) 
(1146, Table 'neutron_ml2.ml2_vlan_allocations' doesn't exist) 'SELECT 
ml2_vlan_allocations.physical_network AS ml2_vlan_allocations_physical_network, 
ml2_vlan_allocations.vlan_id AS ml2_vlan_allocations_vlan_id, 
ml2_vlan_allocations.allocated AS ml2_vlan_allocations_allocated \nFROM 
ml2_vlan_allocations FOR UPDATE' ()

investigation:
This problem was introduced by https://review.openstack.org/#/c/74896/ .

Note that this problem does not occur if neutron-db-manage is run before
running neutron-server, since the ml2_vlan_allocations table is created by
neutron-db-manage.

I usually skipped running neutron-db-manage and it was never a problem. Is
that prohibited now?
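
For reference, the schema can be created first with something like:
$ neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head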

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1285993

Title:
  neutron-server crashes when running on an empty database

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  operation:
  $ mysql
   create database neutron_ml2 character set utf8;
   exit
  $ neutron-server --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini

  error:
  2014-02-28 15:02:46.550 TRACE neutron ProgrammingError: (ProgrammingError) 
(1146, Table 'neutron_ml2.ml2_vlan_allocations' doesn't exist) 'SELECT 
ml2_vlan_allocations.physical_network AS ml2_vlan_allocations_physical_network, 
ml2_vlan_allocations.vlan_id AS ml2_vlan_allocations_vlan_id, 
ml2_vlan_allocations.allocated AS ml2_vlan_allocations_allocated \nFROM 
ml2_vlan_allocations FOR UPDATE' ()

  investigation:
  This problem introduced by https://review.openstack.org/#/c/74896/ .

  Note that this problem does not occur if neutron-db-manage is run
  before running neutron-server, since the ml2_vlan_allocations table is
  created by neutron-db-manage.

  I usually skipped running neutron-db-manage and it was never a problem. Is
  that prohibited now?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1285993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267290] [NEW] Net-list is very slow under metaplugin

2014-01-08 Thread Itsuro Oda
Public bug reported:

If there are many networks when using metaplugin, net-list (the GET networks
API) takes a very long time.
For example (hardware specs etc. are omitted since this is a relative
comparison):

--- 200 networks, openvswitch plugin used natively
$ time neutron net-list
...snip
real0m2.007s
user0m0.428s
sys 0m0.100s
---

--- 200 openvswitch networks, under metaplugin
$ time neutron net-list
...snip
real0m7.700s
user0m0.472s
sys 0m0.072s
---
Note that the quantum-server also consumes a lot of CPU.

** Affects: neutron
 Importance: Undecided
 Assignee: Itsuro Oda (oda-g)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1267290

Title:
  Net-list is very slow under metaplugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If there are many networks when using metaplugin, net-list (GET networks API)
  takes very long time.
  For example: (showing hardware spec etc. is omitted since it is relative 
comparison.)

  --- 200 networks, openvswitch plugin used natively
  $ time neutron net-list
  ...snip
  real0m2.007s
  user0m0.428s
  sys 0m0.100s
  ---

  --- 200 openvswitch networks, under metaplugin
  $ time neutron net-list
  ...snip
  real0m7.700s
  user0m0.472s
  sys 0m0.072s
  ---
  Note that the quantum-server wastes a lot of cpu usage too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1267290/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267291] [NEW] DB lock wait timeout is possible for metaplugin

2014-01-08 Thread Itsuro Oda
Public bug reported:

There are some places where a target plugin's method is called inside
'with context.session.begin(subtransactions=True)'.
This can potentially cause a 'lock wait timeout' error.
--- the error looks like:
OperationalError: (OperationalError) (1205, 'Lock wait timeout exceeded; try
restarting transaction') ...snip
---

For example, say ml2 is the target plugin. ml2's mechanism drivers separate
their precommit and postcommit methods precisely so that the 'lock wait
timeout' error does not occur, but that separation is defeated if ml2 is used
under metaplugin.
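
A rough sketch of the pattern ml2 relies on (illustrative Python, not the
actual neutron code): DB work stays inside the transaction, and anything slow
or blocking is deferred until after the commit. Wrapping the whole call in an
outer session.begin(subtransactions=True), as metaplugin effectively does,
keeps the row locks held while the "postcommit" work runs.
---
def _do_db_work(session, network):
    # insert rows; runs entirely inside the caller's transaction
    return {'id': 'fake-id', 'name': network['name']}


def _postcommit(record):
    # slow or blocking work (e.g. calling out to a backend controller);
    # must run after the transaction has committed and released its locks
    pass


def create_network(session, network):
    with session.begin(subtransactions=True):
        record = _do_db_work(session, network)
    # committed here, so _postcommit cannot hold up another request that is
    # waiting on the same rows (the 'lock wait timeout' scenario)
    _postcommit(record)
    return record
---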

** Affects: neutron
 Importance: Undecided
 Assignee: Itsuro Oda (oda-g)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Itsuro Oda (oda-g)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1267291

Title:
  DB lock wait timeout is possible for metaplugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There are some places that a target plugin's method is called in 
  'with context.session.begin(subtransaction=True)'.
  This causes 'lock wait timeout' error potentially. 
  --- error is like:
  OperationalError: (OperationalError) (1205, 'Lock wait timeout exceeded; try 
restarting transaction') ...snip
  ---

  For example say ml2 is target plugin. Ml2's mechanism drivers separates 
precommit and postcommit
  method so that 'lock wait timeout' error does not occur. 
  But it is meaningless if ml2 is used under metaplugin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1267291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267330] [NEW] RPC callback failure happens in metaplugin

2014-01-08 Thread Itsuro Oda
Public bug reported:

Each target plugin under metaplugin opens a topics.PLUGIN (q-plugin) consumer.
An RPC call from an agent is received by one of the target plugins at random.
Fortunately most RPC callbacks are common to all plugins, but if an RPC is not
supported by the plugin that happens to receive it, an error like the
following occurs.

---
ERROR neutron.openstack.common.rpc.amqp 
[req-be2e4111-3b81-46eb-9200-f4ff7d75984a None None] Exception during message 
handling
TRACE neutron.openstack.common.rpc.amqp Traceback (most recent call last):
TRACE neutron.openstack.common.rpc.amqp   File 
/usr/local/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py, 
line 438, in _process_data
TRACE neutron.openstack.common.rpc.amqp **args)
TRACE neutron.openstack.common.rpc.amqp   File 
/usr/local/lib/python2.7/dist-packages/neutron/common/rpc.py, line 45, in 
dispatch
TRACE neutron.openstack.common.rpc.amqp neutron_ctxt, version, method, 
namespace, **kwargs)
TRACE neutron.openstack.common.rpc.amqp   File 
/usr/local/lib/python2.7/dist-packages/neutron/openstack/common/rpc/dispatcher.py,
 line 176, in dispatch
TRACE neutron.openstack.common.rpc.amqp raise AttributeError(No such RPC 
function '%s' % method)
TRACE neutron.openstack.common.rpc.amqp AttributeError: No such RPC function 
'security_group_rules_for_devices'
---

** Affects: neutron
 Importance: Undecided
 Assignee: Itsuro Oda (oda-g)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Itsuro Oda (oda-g)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1267330

Title:
  RPC callback failure happens in metaplugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Each target plugin under metaplugin opens a topics.PLUGIN (q-plugin)
  consumer.
  An RPC call from an agent is received by one of the target plugins at random.
  Fortunately most RPC callbacks are common to all plugins, but if an RPC is
  not supported by the plugin that happens to receive it, an error like the
  following occurs.

  ---
  ERROR neutron.openstack.common.rpc.amqp 
[req-be2e4111-3b81-46eb-9200-f4ff7d75984a None None] Exception during message 
handling
  TRACE neutron.openstack.common.rpc.amqp Traceback (most recent call last):
  TRACE neutron.openstack.common.rpc.amqp   File 
/usr/local/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py, 
line 438, in _process_data
  TRACE neutron.openstack.common.rpc.amqp **args)
  TRACE neutron.openstack.common.rpc.amqp   File 
/usr/local/lib/python2.7/dist-packages/neutron/common/rpc.py, line 45, in 
dispatch
  TRACE neutron.openstack.common.rpc.amqp neutron_ctxt, version, method, 
namespace, **kwargs)
  TRACE neutron.openstack.common.rpc.amqp   File 
/usr/local/lib/python2.7/dist-packages/neutron/openstack/common/rpc/dispatcher.py,
 line 176, in dispatch
  TRACE neutron.openstack.common.rpc.amqp raise AttributeError(No such RPC 
function '%s' % method)
  TRACE neutron.openstack.common.rpc.amqp AttributeError: No such RPC function 
'security_group_rules_for_devices'
  ---

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1267330/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266954] [NEW] tempest failure: test_list_agent failes with MismatchError

2014-01-07 Thread Itsuro Oda
Public bug reported:

Jenkins job 'check-tempest-dsvm-neutron-pg-isolated' failed when I posted a fix.
(see. 
http://logs.openstack.org/34/65034/1/check/check-tempest-dsvm-neutron-pg-isolated/d3e37ea
 )

Failed test is as follows:
---
 FAIL: 
tempest.api.network.admin.test_agent_management.AgentManagementTestJSON.test_list_agent[gate,smoke]
 
tempest.api.network.admin.test_agent_management.AgentManagementTestJSON.test_list_agent[gate,smoke]

 Traceback (most recent call last):
   File tempest/api/network/admin/test_agent_management.py, line 37, in 
test_list_agent
 self.assertIn(self.agent, agents)
   File /usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 
330, in assertIn
 self.assertThat(haystack, Contains(needle))
   File /usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 
414, in assertThat
 raise MismatchError(matchee, matcher, mismatch, verbose)
 MismatchError: {u'binary': u'neutron-dhcp-agent', u'description': None, ... 
snip
---

I found that the only mismatch is 'heartbeat_timestamp'.
expected: u'heartbeat_timestamp': u'2014-01-06 06:52:48.291601'
actual: u'heartbeat_timestamp': u'2014-01-06 06:52:52.309330'

test_list_agent issues the GET API twice and compares the results.
It is possible for heartbeat_timestamp to differ between the two calls, isn't
it?
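
One possible way to make the comparison robust (a sketch of the idea, not the
actual tempest code): drop volatile fields such as heartbeat_timestamp before
asserting membership.
---
VOLATILE_KEYS = ('heartbeat_timestamp',)


def _stable(agent):
    # copy of the agent dict without fields that change between API calls
    return {k: v for k, v in agent.items() if k not in VOLATILE_KEYS}

# instead of: self.assertIn(self.agent, agents)
# use:        self.assertIn(_stable(self.agent), [_stable(a) for a in agents])
---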

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1266954

Title:
  tempest failure: test_list_agent failes with MismatchError

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Jenkins job 'check-tempest-dsvm-neutron-pg-isolated' failed when I posted a 
fix.
  (see. 
http://logs.openstack.org/34/65034/1/check/check-tempest-dsvm-neutron-pg-isolated/d3e37ea
 )

  Failed test is as follows:
  ---
   FAIL: 
tempest.api.network.admin.test_agent_management.AgentManagementTestJSON.test_list_agent[gate,smoke]
   
tempest.api.network.admin.test_agent_management.AgentManagementTestJSON.test_list_agent[gate,smoke]

   Traceback (most recent call last):
 File tempest/api/network/admin/test_agent_management.py, line 37, in 
test_list_agent
   self.assertIn(self.agent, agents)
 File /usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 
330, in assertIn
   self.assertThat(haystack, Contains(needle))
 File /usr/local/lib/python2.7/dist-packages/testtools/testcase.py, line 
414, in assertThat
   raise MismatchError(matchee, matcher, mismatch, verbose)
   MismatchError: {u'binary': u'neutron-dhcp-agent', u'description': None, ... 
snip
  ---

  I found that the only mismatch is 'heartbeat_timestamp'.
  expected: u'heartbeat_timestamp': u'2014-01-06 06:52:48.291601'
  actual: u'heartbeat_timestamp': u'2014-01-06 06:52:52.309330'

  test_list_agent issues the GET API twice and compares the results.
  It is possible for heartbeat_timestamp to differ between the two calls,
  isn't it?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1266954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1266347] [NEW] Metaplugin can not be used with router service-type plugin

2014-01-05 Thread Itsuro Oda
Public bug reported:

When the metaplugin is selected as the core plugin and a router service-
type plugin is specified at the same time (see below), quantum-server
crashes with the following error.

--- neutron.conf ---
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
core_plugin = neutron.plugins.metaplugin.meta_neutron_plugin.MetaPluginV2
---

--- error log ---
2013-12-19 16:07:21.282 TRACE neutron   File 
/opt/stack/neutron/neutron/manager.py, line 176, in _load_service_plugins
2013-12-19 16:07:21.282 TRACE neutron plugin_inst.get_plugin_type())
2013-12-19 16:07:21.282 TRACE neutron ValueError: (u'Multiple plugins for 
service %s were configured', 'L3_ROUTER_NAT')
---

Core plugins are moving toward not supporting the router extension, so the
metaplugin should be usable together with a router service-type plugin.

** Affects: neutron
 Importance: Undecided
 Assignee: Itsuro Oda (oda-g)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Itsuro Oda (oda-g)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1266347

Title:
  Metaplugin can not be used with router service-type plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When the metaplugin is selected as the core plugin and a router
  service-type plugin is specified at the same time (see below),
  quantum-server crashes with the following error.

  --- neutron.conf ---
  service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
  core_plugin = neutron.plugins.metaplugin.meta_neutron_plugin.MetaPluginV2
  ---

  --- error log ---
  2013-12-19 16:07:21.282 TRACE neutron   File 
/opt/stack/neutron/neutron/manager.py, line 176, in _load_service_plugins
  2013-12-19 16:07:21.282 TRACE neutron plugin_inst.get_plugin_type())
  2013-12-19 16:07:21.282 TRACE neutron ValueError: (u'Multiple plugins for 
service %s were configured', 'L3_ROUTER_NAT')
  ---

  Core plugins are moving toward not supporting the router extension, so the
  metaplugin should be usable together with a router service-type plugin.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1266347/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp