[Yahoo-eng-team] [Bug 1816747] [NEW] Neutron neutron-db-manage failures with openstack master/stein branch

2019-02-20 Thread Romil Gupta
Public bug reported:

Neutron plugin: NSX Policy Plugin

We are bringing up an NSX Policy plugin setup with the master branch, and
as part of the OpenStack installation it failed with the error below:

changed: [osctrl01] => (item=nova-manage db sync)
failed: [osctrl01] (item=neutron-db-manage --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/vmware/nsx.ini 
upgrade head) => {"changed": true, "cmd": ["neutron-db-manage", 
"--config-file", "/etc/neutron/neutron.conf", "--config-file", 
"/etc/neutron/plugins/vmware/nsx.ini", "upgrade", "head"], "delta": 
"0:01:08.695483", "end": "2019-02-18 19:00:42.339996", "failed": true, "item": 
"neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/vmware/nsx.ini upgrade head", "rc": 1, "start": 
"2019-02-18 18:59:33.644513", "stderr": 
"/opt/mhos/python/lib/python2.7/site-packages/psycopg2/__init__.py:144: 
UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in 
order to keep installing from binary please use \"pip install psycopg2-binary\" 
instead. For details see: 
.\n  
\"\"\")\nINFO  [alembic.runtime.migration] Context impl MySQLImpl.\nINFO  
[alembic.runtime.migration] Will assume non-transactional DDL.\nINFO  
[alembic.runtime.migration] Context impl MySQLImpl.\nINFO  
[alembic.runtime.migration] Will assume non-transactional DDL.\nINFO  
[alembic.runtime.migration] Running upgrade  -> kilo\nINFO  
[alembic.runtime.migration] Running upgrade kilo -> 354db87e3225\nINFO  
[alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151\nINFO  
[alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf\nINFO  
[alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee\nINFO  
[alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f\nINFO  
[alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773\nINFO  
[alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592\nINFO  
[alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7\nINFO  
[alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79\nINFO  
[alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051\nINFO  
[alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136\nINFO  
[alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59\nINFO  
[alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d\nINFO  
[alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a\nINFO  
[alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25\nINFO  
[alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee\nINFO  
[alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9\nINFO  
[alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4\nINFO  
[alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664\nINFO  
[alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5\nINFO  
[alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f\nINFO  
[alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821\nINFO  
[alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4\nINFO  
[alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81\nINFO  
[alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6\nINFO  
[alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532\nINFO  
[alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f\nINFO  
[alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a\nINFO  
[alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b\nINFO  
[alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73\nINFO  
[alembic.runtime.migration] Running upgrade kilo -> 30018084ec99\nINFO  
[alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada\nINFO  
[alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016\nINFO  
[alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3\nINFO  
[alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d\nINFO  
[alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d\nINFO  
[alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297\nINFO  
[alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c\nINFO  
[alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39\nINFO  
[alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b\nINFO  
[alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050\nINFO  
[alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9\nINFO  
[alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada\nINFO  

[Yahoo-eng-team] [Bug 1435852] Re: Use first() instead of one() in tunnel endpoint query

2016-03-01 Thread Romil Gupta
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1435852

Title:
  Use first() instead of one() in tunnel endpoint query

Status in neutron:
  Invalid

Bug description:
  Consider neutron-server running in HA mode: Thread A is trying to delete
  the endpoint for tunnel_ip=10.0.0.2:
  https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_tunnel.py#L243
  whereas Thread B is trying to add the endpoint for tunnel_ip=10.0.0.2,
  which already exists, so it falls into the except db_exc.DBDuplicateEntry
  branch and looks up the ip_address. But Thread A could delete it in the
  meantime, since both threads are async. In that case the query will raise
  an exception if we use one() instead of first().

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1435852/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473909] Re: Error message during nova delete (ESXi based devstack setup using ovsvapp solution)

2015-11-04 Thread Romil Gupta
** Tags added: networking-vsphere

** Project changed: nova => networking-vsphere

** Changed in: networking-vsphere
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1473909

Title:
  Error message during nova delete (ESXi based devstack setup using
  ovsvapp solution)

Status in networking-vsphere:
  Confirmed

Bug description:
  I am trying "https://github.com/openstack/networking-vsphere/tree/master/devstack"
  for the OVSvApp solution, consisting of 3 DVS:
  1. Trunk DVS
  2. Management DVS
  3. Uplink DVS

  I am using an ESXi-based devstack setup with vCenter, working with
  stable/kilo.

  I could successfully boot an instance using nova boot.

  When I delete the same instance using nova delete, the API request is
  successful. The VM is deleted after a long time, but in the meantime the
  following error occurs:

  2015-07-13 21:53:44.193 ERROR nova.network.base_api 
[req-760e73b5-9815-441d-931e-c0a57f8d32f3 None None] [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] Failed storing info cache
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] Traceback (most recent call last):
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/opt/stack/nova/nova/network/base_api.py", line 49, in 
update_instance_cache_with_nw_info
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] ic.save(update_cells=update_cells)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/opt/stack/nova/nova/objects/base.py", line 192, in wrapper
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] self._context, self, fn.__name__, 
args, kwargs)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/opt/stack/nova/nova/conductor/rpcapi.py", line 340, in object_action
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] objmethod=objmethod, args=args, 
kwargs=kwargs)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 
156, in call
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] retry=self.retry)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, 
in _send
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] timeout=timeout, retry=retry)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 350, in send
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] retry=retry)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", 
line 341, in _send
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] raise result
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] InstanceInfoCacheNotFound_Remote: Info 
cache for instance 06a3de55-285d-4d0d-953e-7f99aed28e95 could not be found.
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] Traceback (most recent call last):
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/opt/stack/nova/nova/conductor/manager.py", line 422, in _object_dispatch
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] return getattr(target, method)(*args, 
**kwargs)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]   File 
"/opt/stack/nova/nova/objects/base.py", line 208, in wrapper
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95] return fn(self, *args, **kwargs)
  2015-07-13 21:53:44.193 TRACE nova.network.base_api [instance: 
06a3de55-285d-4d0d-953e-7f99aed28e95]
  2015-07-13 

[Yahoo-eng-team] [Bug 1512328] [NEW] Postcommit network and port methods not available in ovsvapp mech_driver

2015-11-02 Thread Romil Gupta
Public bug reported:

In the Liberty cycle, we implemented these methods in
ovsvapp_driver.py, which lives in openstack/networking-vsphere:

def delete_network_postcommit(self, context):
def create_port_postcommit(self, context):
def update_port_postcommit(self, context):
def delete_port_postcommit(self, context):

All these methods need to be called by the OVSvApp shim mechanism
driver in neutron.
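The wiring being requested can be sketched as follows. This is a minimal, self-contained sketch: the neutron ML2 base class and the networking-vsphere delegate are stubbed out here, and all class names except the four postcommit methods are illustrative.

```python
# Sketch of a thin "shim" ML2 mechanism driver that forwards postcommit
# events to an out-of-tree driver (e.g. OVSvApp). The real base class is
# neutron's MechanismDriver and the real delegate lives in
# openstack/networking-vsphere; both are stand-ins here.

class MechanismDriver(object):
    """Stand-in for neutron.plugins.ml2.driver_api.MechanismDriver."""
    def initialize(self):
        pass

class OVSvAppDriver(object):
    """Stand-in for the networking-vsphere driver; records calls."""
    def __init__(self):
        self.calls = []
    def delete_network_postcommit(self, context):
        self.calls.append(('delete_network_postcommit', context))
    def create_port_postcommit(self, context):
        self.calls.append(('create_port_postcommit', context))
    def update_port_postcommit(self, context):
        self.calls.append(('update_port_postcommit', context))
    def delete_port_postcommit(self, context):
        self.calls.append(('delete_port_postcommit', context))

class OVSvAppShimMechanismDriver(MechanismDriver):
    """Thin shim: ML2 postcommit hooks land here and are delegated."""
    def initialize(self):
        self.driver = OVSvAppDriver()
    def delete_network_postcommit(self, context):
        self.driver.delete_network_postcommit(context)
    def create_port_postcommit(self, context):
        self.driver.create_port_postcommit(context)
    def update_port_postcommit(self, context):
        self.driver.update_port_postcommit(context)
    def delete_port_postcommit(self, context):
        self.driver.delete_port_postcommit(context)

shim = OVSvAppShimMechanismDriver()
shim.initialize()
shim.create_port_postcommit({'port': 'p1'})
shim.delete_network_postcommit({'network': 'n1'})
```

The shim keeps neutron's tree free of vSphere-specific logic while still letting ML2 drive the out-of-tree agent.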

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: In Progress


** Tags: ml2 networking-vsphere

** Tags added: networking-vsphere

** Tags added: ml2

** Summary changed:

- Postcommit network nad port hooks not available in ovsvapp mech_driver
+ Postcommit network and port methods not available in ovsvapp mech_driver

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1512328

Title:
  Postcommit network and port methods not available in ovsvapp
  mech_driver

Status in neutron:
  In Progress

Bug description:
  In the Liberty cycle, we implemented these methods in
  ovsvapp_driver.py, which lives in openstack/networking-vsphere:

  def delete_network_postcommit(self, context):
  def create_port_postcommit(self, context):
  def update_port_postcommit(self, context):
  def delete_port_postcommit(self, context):

  All these methods need to be called by the OVSvApp shim mechanism
  driver in neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1512328/+subscriptions



[Yahoo-eng-team] [Bug 1507684] [NEW] Unable to establish tunnel across hypervisor

2015-10-19 Thread Romil Gupta
Public bug reported:


As part of the networking-vsphere project, an OVSvApp agent runs on
each ESXi host inside a service VM and talks to a neutron-server that
has l2pop enabled, in a multi-hypervisor deployment (e.g. KVM and
ESXi). Tunnels are not being established between the KVM compute node
and the ESXi host. The l2pop mech_driver needs to support the OVSvApp
agent so the tunnels can be formed.
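For reference, l2pop participates as an ML2 mechanism driver alongside the agent drivers, plus an agent-side flag. A typical ml2_conf.ini fragment might look like the following; the `ovsvapp` driver alias and the VNI range are illustrative assumptions, not taken from this bug:

```ini
[ml2]
tenant_network_types = vxlan
type_drivers = vxlan
# l2population must be listed so FDB updates are fanned out to agents
mechanism_drivers = openvswitch,ovsvapp,l2population

[ml2_type_vxlan]
vni_ranges = 1001:2000

[agent]
tunnel_types = vxlan
l2_population = True
```

The bug is that even with this configuration, l2pop did not treat the OVSvApp agent as a tunnel endpoint peer.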

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507684

Title:
  Unable to establish tunnel across hypervisor

Status in neutron:
  In Progress

Bug description:
  
  As part of the networking-vsphere project, an OVSvApp agent runs on
  each ESXi host inside a service VM and talks to a neutron-server that
  has l2pop enabled, in a multi-hypervisor deployment (e.g. KVM and
  ESXi). Tunnels are not being established between the KVM compute node
  and the ESXi host. The l2pop mech_driver needs to support the OVSvApp
  agent so the tunnels can be formed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507684/+subscriptions



[Yahoo-eng-team] [Bug 1441043] [NEW] Move values for network_type to plugins.common.constants.py

2015-04-07 Thread Romil Gupta
Public bug reported:

It is confusing to have the values for network types in
common/constants.py instead of plugins/common/constants.py.

Currently, plugins/common/constants.py contains network_type constants
such as VLAN, VXLAN, and GRE, but the values for network types, such as
ranges, are defined in common/constants.py. It would be better to keep
both in the same place.

Also, it would be better to move the few methods that are predominantly
used by plugins from common/utils.py to plugins/common/utils.py.
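The proposed grouping can be pictured as a single module. The values shown below are the well-known protocol limits; the exact set of constants that would move is illustrative, not neutron's literal file:

```python
# Sketch of the proposed plugins/common/constants.py: network-type names
# and their value ranges side by side.
TYPE_VLAN = 'vlan'
TYPE_GRE = 'gre'
TYPE_VXLAN = 'vxlan'

# ...and the ranges that today sit in common/constants.py:
MIN_VLAN_TAG = 1
MAX_VLAN_TAG = 4094          # 12-bit VLAN ID space, minus reserved values
MIN_GRE_ID = 0
MAX_GRE_ID = 2 ** 32 - 1     # 32-bit GRE key
MIN_VXLAN_VNI = 1
MAX_VXLAN_VNI = 2 ** 24 - 1  # 24-bit VXLAN network identifier
```

Keeping the names and their valid ranges in one module means a type driver imports a single constants module instead of two.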

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441043

Title:
  Move values for network_type to plugins.common.constants.py

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  It is confusing to have the values for network types in
  common/constants.py instead of plugins/common/constants.py.

  Currently, plugins/common/constants.py contains network_type constants
  such as VLAN, VXLAN, and GRE, but the values for network types, such
  as ranges, are defined in common/constants.py. It would be better to
  keep both in the same place.

  Also, it would be better to move the few methods that are
  predominantly used by plugins from common/utils.py to
  plugins/common/utils.py.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1441043/+subscriptions



[Yahoo-eng-team] [Bug 1435852] [NEW] Use first() instead of one() in tunnel endpoint query

2015-03-24 Thread Romil Gupta
Public bug reported:

Consider neutron-server running in HA mode: Thread A is trying to delete
the endpoint for tunnel_ip=10.0.0.2:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_tunnel.py#L243
whereas Thread B is trying to add the endpoint for tunnel_ip=10.0.0.2,
which already exists, so it falls into the except db_exc.DBDuplicateEntry
branch and looks up the ip_address. But Thread A could delete it in the
meantime, since both threads are async. In that case the query will raise
an exception if we use one() instead of first().
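The difference can be shown in a few lines of SQLAlchemy. This is a minimal sketch with an illustrative schema, not neutron's actual tables: when a concurrent thread has deleted the row between the duplicate-entry detection and this lookup, first() returns None while one() raises NoResultFound.

```python
# Simulate the lookup racing against a concurrent delete: the endpoints
# table is empty when the query runs.
import sqlalchemy as sa
from sqlalchemy.exc import NoResultFound
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Endpoint(Base):
    __tablename__ = 'endpoints'
    ip_address = sa.Column(sa.String(64), primary_key=True)

engine = sa.create_engine('sqlite://')  # in-memory DB
Base.metadata.create_all(engine)

with Session(engine) as session:
    # No row for 10.0.0.2 exists, simulating the concurrent delete.
    query = session.query(Endpoint).filter_by(ip_address='10.0.0.2')
    missing = query.first()   # None: the caller can handle the miss
    try:
        query.one()           # raises instead of returning None
        raised = False
    except NoResultFound:
        raised = True
```

With first(), the caller decides what a missing endpoint means; with one(), the race surfaces as an unhandled exception in the RPC handler.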

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1435852

Title:
  Use first() instead of one() in tunnel endpoint query

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Consider neutron-server running in HA mode: Thread A is trying to delete
  the endpoint for tunnel_ip=10.0.0.2:
  https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/type_tunnel.py#L243
  whereas Thread B is trying to add the endpoint for tunnel_ip=10.0.0.2,
  which already exists, so it falls into the except db_exc.DBDuplicateEntry
  branch and looks up the ip_address. But Thread A could delete it in the
  meantime, since both threads are async. In that case the query will raise
  an exception if we use one() instead of first().

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1435852/+subscriptions



[Yahoo-eng-team] [Bug 1426427] Re: Improve tunnel_sync server side rpc to handle race conditions

2015-03-01 Thread Romil Gupta
** Description changed:

- We  have a concern that we may end up with db conflict errors due to
- multiple parallel requests incoming.
+ We have a concern that we may have race conditions with the following
+ code snippet:
  
- Consider two threads (A and B), each receiving tunnel_sync with host set to 
HOST1 and HOST2. The race scenario is:
- A checks whether tunnel exists and receives nothing.
- B checks whether tunnel exists and receives nothing.
- A adds endpoint with HOST1.
- B adds endpoint with HOST2.
+ if host:
+ host_endpoint = driver.obj.get_endpoint_by_host(host)
+ ip_endpoint = driver.obj.get_endpoint_by_ip(tunnel_ip)
  
- Now we have two endpoints for the same IP address with different hosts (I 
guess that is not what we would expect).
- I think the only way to avoid it is check for tunnel existence under the same 
transaction that will update it, if present. Probably meaning, making 
add_endpoint aware of potential tunnel existence.
+ if (ip_endpoint and ip_endpoint.host is None
+ and host_endpoint is None):
+ driver.obj.delete_endpoint(ip_endpoint.ip_address)
+ elif (ip_endpoint and ip_endpoint.host != host):
+ msg = (_("Tunnel IP %(ip)s in use with host %(host)s"),
+{'ip': ip_endpoint.ip_address,
+ 'host': ip_endpoint.host})
+ raise exc.InvalidInput(error_message=msg)
+ elif (host_endpoint and host_endpoint.ip_address != 
tunnel_ip):
+ # Notify all other listening agents to delete stale 
tunnels
+ self._notifier.tunnel_delete(rpc_context,
+ host_endpoint.ip_address, tunnel_type)
+ driver.obj.delete_endpoint(host_endpoint.ip_address)
+ 
+ Consider two threads (A and B), where for
+ 
+ Thread A we have the following use case:
+ if a host is passed from an agent and it is not found in the DB but the
+ passed tunnel_ip is found, delete the endpoint from the DB and add the
+ endpoint with (tunnel_ip, host); it's an upgrade case.
+ 
+ whereas for Thread B we have the following use case:
+ if the passed host and tunnel_ip are not found in the DB, it is a new
+ endpoint.
+ 
+ Both threads will do the following in the end:
+ 
+ tunnel = driver.obj.add_endpoint(tunnel_ip, host)
+ tunnels = driver.obj.get_endpoints()
+ entry = {'tunnels': tunnels}
+ # Notify all other listening agents
+ self._notifier.tunnel_update(rpc_context, tunnel.ip_address,
+  tunnel_type)
+ # Return the list of tunnels IP's to the agent
+ return entry
+ 
+ 
+ Since Thread A first deletes the endpoint and then re-adds it, there is
+ a window in which Thread B does not see that endpoint in its
+ get_endpoints call during the race.
+ 
+ One way to overcome this problem would be to introduce an
+ update_endpoint method in the type_drivers instead of doing
+ delete_endpoint.
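The update_endpoint idea can be sketched with an in-memory stand-in for the type-driver table. The method names follow the proposal in the description; the storage and driver class are illustrative, not neutron's actual implementation:

```python
# With delete_endpoint + add_endpoint, the row vanishes between the two
# calls and a concurrent get_endpoints can miss it. update_endpoint
# rewrites the host on the existing row in one step, so the row stays
# visible throughout.

class TunnelTypeDriver(object):
    def __init__(self):
        self._by_ip = {}  # ip_address -> host

    def add_endpoint(self, ip, host):
        self._by_ip[ip] = host
        return (ip, host)

    def update_endpoint(self, ip, host):
        # The row is never absent; only its host changes.
        if ip not in self._by_ip:
            raise KeyError(ip)
        self._by_ip[ip] = host
        return (ip, host)

    def get_endpoints(self):
        return sorted(self._by_ip.items())

driver = TunnelTypeDriver()
driver.add_endpoint('10.0.0.2', None)        # pre-upgrade row, no host
driver.update_endpoint('10.0.0.2', 'HOST1')  # upgrade case: keep the row
```

In a real type driver the same effect would come from a single UPDATE inside the transaction rather than a DELETE followed by an INSERT.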

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

** Changed in: neutron
   Status: Invalid => New

** Tags added: ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426427

Title:
  Improve tunnel_sync server side rpc to handle race conditions

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We have a concern that we may have race conditions with the following
  code snippet:

  if host:
  host_endpoint = driver.obj.get_endpoint_by_host(host)
  ip_endpoint = driver.obj.get_endpoint_by_ip(tunnel_ip)

  if (ip_endpoint and ip_endpoint.host is None
  and host_endpoint is None):
  driver.obj.delete_endpoint(ip_endpoint.ip_address)
  elif (ip_endpoint and ip_endpoint.host != host):
  msg = (_("Tunnel IP %(ip)s in use with host %(host)s"),
 {'ip': ip_endpoint.ip_address,
  'host': ip_endpoint.host})
  raise exc.InvalidInput(error_message=msg)
  elif (host_endpoint and host_endpoint.ip_address != 
tunnel_ip):
  # Notify all other listening agents to delete stale 
tunnels
  self._notifier.tunnel_delete(rpc_context,
  host_endpoint.ip_address, tunnel_type)
  driver.obj.delete_endpoint(host_endpoint.ip_address)

  Consider two threads (A and B), where for

  Thread A we have the following use case:
  if a host is passed from an agent and it is not found in the DB but the
  passed tunnel_ip is found, delete the endpoint from the DB and add the
  endpoint with (tunnel_ip, host); it's an upgrade case.

  whereas

[Yahoo-eng-team] [Bug 1426904] [NEW] Can't start Neutron service when max VNI value is defined in ml2_conf.ini file

2015-03-01 Thread Romil Gupta
/util.py", line 55, in fix_call
2015-02-28 17:11:33.793 TRACE neutron val = callable(*args, **kw)
2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/api/v2/router.py", line 69, in factory
2015-02-28 17:11:33.793 TRACE neutron return cls(**local_config)
2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/api/v2/router.py", line 73, in __init__
2015-02-28 17:11:33.793 TRACE neutron plugin = manager.NeutronManager.get_plugin()
2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/manager.py", line 219, in get_plugin
2015-02-28 17:11:33.793 TRACE neutron return weakref.proxy(cls.get_instance().plugin)
2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/manager.py", line 213, in get_instance
2015-02-28 17:11:33.793 TRACE neutron cls._create_instance()
2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/openstack/common/lockutils.py", line 272, in inner
2015-02-28 17:11:33.793 TRACE neutron return f(*args, **kwargs)
2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/manager.py", line 199, in _create_instance
2015-02-28 17:11:33.793 TRACE neutron cls._instance = cls()
2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/manager.py", line 114, in __init__
2015-02-28 17:11:33.793 TRACE neutron plugin_provider)
2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/manager.py", line 140, in _get_plugin_instance
2015-02-28 17:11:33.793 TRACE neutron return plugin_class()
2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 126, in __init__
2015-02-28 17:11:33.793 TRACE neutron self.type_manager.initialize()
2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 152, in initialize
2015-02-28 17:11:33.793 TRACE neutron driver.obj.initialize()
2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/type_vxlan.py", line 80, in initialize
2015-02-28 17:11:33.793 TRACE neutron self._initialize(cfg.CONF.ml2_type_vxlan.vni_ranges)
2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/type_tunnel.py", line 65, in _initialize
2015-02-28 17:11:33.793 TRACE neutron self.sync_allocations()
2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/type_vxlan.py", line 96, in sync_allocations
2015-02-28 17:11:33.793 TRACE neutron vxlan_vnis |= set(moves.xrange(tun_min, tun_max + 1))
2015-02-28 17:11:33.793 TRACE neutron MemoryError
2015-02-28 17:11:33.793 TRACE neutron
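The traceback shows the cause: sync_allocations materializes every VNI in the configured range as a Python set, so vni_ranges = 1001:16777215 asks for roughly 16.7 million ints up front. A rough sketch of the cost and of a lazier alternative (the function and variable names here are illustrative, not neutron's fix):

```python
# Keep the ranges as (lo, hi) bounds and test membership arithmetically
# instead of building the full set up front.
import sys

ranges = [(1001, 16777215)]  # parsed form of vni_ranges = 1001:16777215

def vni_in_ranges(vni, ranges):
    """O(len(ranges)) membership test; no per-VNI storage."""
    return any(lo <= vni <= hi for lo, hi in ranges)

# Even a 1000-entry materialized slice dwarfs the bounds representation;
# the full 16.7M-entry range would be ~16,000x larger still.
small_slice = set(range(1001, 2001))
bounds_bytes = sys.getsizeof(ranges) + sum(sys.getsizeof(r) for r in ranges)
slice_bytes = sys.getsizeof(small_slice)
```

Whatever the eventual fix, the principle is the same: the valid-range check does not require one Python object per VNI.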

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: New

** Project changed: python-neutronclient => neutron

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426904

Title:
  Can't start Neutron service when max VNI value is defined in
  ml2_conf.ini file

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The VNI value is 24 bits, i.e. 1–16777215

  When I defined the max value in the ml2_conf.ini file:
  [ml2_type_vxlan]
  vni_ranges = 1001:16777215

  Neutron service fails to start.

  2015-02-28 17:11:33.793 CRITICAL neutron [-] MemoryError

  2015-02-28 17:11:33.793 TRACE neutron Traceback (most recent call last):
  2015-02-28 17:11:33.793 TRACE neutron   File "/usr/local/bin/neutron-server", line 9, in <module>
  2015-02-28 17:11:33.793 TRACE neutron load_entry_point('neutron==2014.2.2.dev438', 'console_scripts', 'neutron-server')()
  2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/server/__init__.py", line 48, in main
  2015-02-28 17:11:33.793 TRACE neutron neutron_api = service.serve_wsgi(service.NeutronApiService)
  2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/service.py", line 105, in serve_wsgi
  2015-02-28 17:11:33.793 TRACE neutron LOG.exception(_('Unrecoverable error: please check log '
  2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/openstack/common/excutils.py", line 82, in __exit__
  2015-02-28 17:11:33.793 TRACE neutron six.reraise(self.type_, self.value, self.tb)
  2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/service.py", line 102, in serve_wsgi
  2015-02-28 17:11:33.793 TRACE neutron service.start()
  2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/service.py", line 73, in start
  2015-02-28 17:11:33.793 TRACE neutron self.wsgi_app = _run_wsgi(self.app_name)
  2015-02-28 17:11:33.793 TRACE neutron   File "/opt/stack/neutron/neutron/service.py", line 168, in _run_wsgi
  2015-02-28 17:11:33.793 TRACE neutron app

[Yahoo-eng-team] [Bug 1426365] [NEW] Ml2 Mechanism Driver for OVSvApp Solution

2015-02-27 Thread Romil Gupta
Public bug reported:

With the introduction of the stackforge/networking-vsphere project,
which includes the OVSvApp L2 agent for doing vSphere networking using
neutron, we need a thin mechanism driver in neutron that integrates the
ML2 plugin with the OVSvApp L2 agent.

The mechanism driver implements the abstract method given in
mech_agent.SimpleAgentMechanismDriverBase and contains the RPCs
specific to the OVSvApp L2 agent.
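The abstract-method pattern described above can be sketched as follows. The base class here is a stand-in for mech_agent.SimpleAgentMechanismDriverBase (the real one lives in neutron); the agent type string, vif details, and binding rule are assumptions for illustration:

```python
# Minimal shape of an agent-based ML2 mechanism driver: the base class
# carries the agent type and vif details, and the single abstract hook
# decides whether the agent reported on a host can bind a segment.
import abc

AGENT_TYPE_OVSVAPP = 'OVSvApp Agent'  # illustrative constant

class SimpleAgentMechanismDriverBase(abc.ABC):
    """Stand-in for neutron's mech_agent.SimpleAgentMechanismDriverBase."""
    def __init__(self, agent_type, vif_type, vif_details):
        self.agent_type = agent_type
        self.vif_type = vif_type
        self.vif_details = vif_details

    @abc.abstractmethod
    def check_segment_for_agent(self, segment, agent):
        """Return True if the agent can bind this segment."""

class OVSvAppMechanismDriver(SimpleAgentMechanismDriverBase):
    def __init__(self):
        super().__init__(AGENT_TYPE_OVSVAPP, 'ovs', {'port_filter': True})

    def check_segment_for_agent(self, segment, agent):
        # Bind VLAN segments always; tunnel segments only if the agent
        # reported support for that tunnel type.
        tunnel_types = agent.get('configurations', {}).get('tunnel_types', [])
        return segment.get('network_type') in ('vlan', *tunnel_types)

driver = OVSvAppMechanismDriver()
agent = {'configurations': {'tunnel_types': ['vxlan']}}
ok = driver.check_segment_for_agent({'network_type': 'vxlan'}, agent)
no = driver.check_segment_for_agent({'network_type': 'gre'}, agent)
```

The driver stays thin because the base class already handles port binding mechanics; only the segment-compatibility check and the agent-specific RPCs are driver code.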

** Affects: neutron
 Importance: Wishlist
 Assignee: Romil Gupta (romilg)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426365

Title:
  Ml2 Mechanism Driver for OVSvApp Solution

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  With the introduction of the stackforge/networking-vsphere project,
  which includes the OVSvApp L2 agent for doing vSphere networking using
  neutron, we need a thin mechanism driver in neutron that integrates
  the ML2 plugin with the OVSvApp L2 agent.

  The mechanism driver implements the abstract method given in
  mech_agent.SimpleAgentMechanismDriverBase and contains the RPCs
  specific to the OVSvApp L2 agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1426365/+subscriptions



[Yahoo-eng-team] [Bug 1426427] Re: Improve tunnel_sync server side rpc

2015-02-27 Thread Romil Gupta
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426427

Title:
  Improve tunnel_sync server side rpc

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  We have a concern that we may end up with DB conflict errors due to
  multiple parallel incoming requests.

  Consider two threads (A and B), each receiving tunnel_sync with host
  set to HOST1 and HOST2 respectively. The race scenario is:
  A checks whether the tunnel exists and finds nothing.
  B checks whether the tunnel exists and finds nothing.
  A adds an endpoint with HOST1.
  B adds an endpoint with HOST2.

  Now we have two endpoints for the same IP address with different
  hosts, which is not what we would expect.
  I think the only way to avoid this is to check for tunnel existence
  inside the same transaction that will update it if present, which
  probably means making add_endpoint aware of a potentially existing
  tunnel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1426427/+subscriptions



[Yahoo-eng-team] [Bug 1426427] [NEW] Improve tunnel_sync server side rpc

2015-02-27 Thread Romil Gupta
Public bug reported:

We have a concern that we may end up with DB conflict errors due to
multiple parallel incoming requests.

Consider two threads (A and B), each receiving tunnel_sync with host
set to HOST1 and HOST2 respectively. The race scenario is:
A checks whether the tunnel exists and finds nothing.
B checks whether the tunnel exists and finds nothing.
A adds an endpoint with HOST1.
B adds an endpoint with HOST2.

Now we have two endpoints for the same IP address with different hosts,
which is not what we would expect.
I think the only way to avoid this is to check for tunnel existence
inside the same transaction that will update it if present, which
probably means making add_endpoint aware of a potentially existing
tunnel.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426427

Title:
  Improve tunnel_sync server side rpc

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We have a concern that we may end up with DB conflict errors due to
  multiple incoming parallel requests.

  Consider two threads (A and B), each receiving tunnel_sync with host set
  to HOST1 and HOST2 respectively. The race scenario is:
  A checks whether the tunnel exists and receives nothing.
  B checks whether the tunnel exists and receives nothing.
  A adds an endpoint with HOST1.
  B adds an endpoint with HOST2.

  Now we have two endpoints for the same IP address with different hosts
  (which is presumably not what we would expect).
  I think the only way to avoid this is to check for tunnel existence in the
  same transaction that updates it, if present; that probably means making
  add_endpoint aware of potential tunnel existence.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1426427/+subscriptions



[Yahoo-eng-team] [Bug 1332067] Re: neutron server should quit without a valid mechanism_drivers setting

2015-02-10 Thread Romil Gupta
We are reading this config item as:

cfg.ListOpt('mechanism_drivers',
            default=[],
            help=_("An ordered list of networking mechanism driver "
                   "entry points to be loaded from the "
                   "neutron.ml2.mechanism_drivers namespace.")),

which states that by default it is [], hence neutron-server keeps running.

I don't feel we can do anything in the code to validate that the config
option is spelled correctly or free of typos.

Anyway, it is well documented in the OpenStack manuals what needs to be
configured in ml2_conf.ini.
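That said, while a misspelled option name cannot be caught, an empty
resolved list could be rejected at startup. A hedged sketch (illustrative
only, not actual Neutron code; the function name is hypothetical) of
failing fast when mechanism_drivers stays at its empty-list default:

```python
# Illustrative fail-fast check: refuse to start when the mechanism_drivers
# list resolved from [ml2] is empty (e.g. because of the typo in this bug).
def validate_mechanism_drivers(mechanism_drivers):
    if not mechanism_drivers:
        raise SystemExit(
            "No mechanism_drivers configured in the [ml2] section of "
            "ml2_conf.ini; refusing to start with an empty driver list.")
    return mechanism_drivers

validate_mechanism_drivers(["openvswitch"])  # valid config passes through
try:
    validate_mechanism_drivers([])           # the typo case: default []
except SystemExit as exc:
    print(exc)
```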


Hence, marking this bug as invalid.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1332067

Title:
  neutron server should quit without a valid mechanism_drivers setting

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I've made a typo while setting up my environment, accidentally writing
  mechanism_driver = openvswitch instead of mechanism_drivers =
  openvswitch, which caused neutron to start without any warnings, but
  port binding fails instantly, so VMs cannot boot.

  Should we throw an ERROR while starting and quit, or should we throw
  an ERROR in the port_binding code?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1332067/+subscriptions



[Yahoo-eng-team] [Bug 1315412] Re: tenant_network_type = vxlan has no effect

2015-02-10 Thread Romil Gupta
As mentioned in the comment above, changing the status to Fix Released.

** Changed in: neutron
   Status: Confirmed => Fix Committed

** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1315412

Title:
  tenant_network_type = vxlan has no effect

Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  I am using havana on precise with the OVS plugin. I tried switching from
  GRE to VXLAN, but the config change made no difference. I tried
  redeploying the neutron and compute nodes; this made no difference
  either, and I could still see GRE tunnels in place:
  Port gre-3
      Interface gre-3
          type: gre
          options: {in_key=flow, local_ip=y.y.y.y, out_key=flow, remote_ip=x.x.x.x}

  On neutron gateways /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini 
looks like this:
  [ovs]
  local_ip = xx.xx.xx.xx
  tenant_network_type = vxlan
  enable_tunneling = True
  tunnel_id_ranges = 1:1000

  And on compute hosts:

  [OVS]
  tunnel_id_ranges = 1:1000
  tenant_network_type = vxlan
  enable_tunneling = True
  local_ip = yy.yy.yy.yy

  [DATABASE]
  sql_connection = mysql://user:p...@zz.zz.zz.zz/neutron?charset=utf8
  reconnect_interval = 2
  [AGENT]
  polling_interval = 2

  [SECURITYGROUP]

  neutron-plugin-openvswitch:
Installed: 1:2013.2.3-0ubuntu1~cloud0
Candidate: 1:2013.2.3-0ubuntu1~cloud0
Version table:
   *** 1:2013.2.3-0ubuntu1~cloud0 0
  500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ 
precise-updates/havana/main amd64 Packages
  100 /var/lib/dpkg/status

  neutron-plugin-openvswitch-agent:
Installed: 1:2013.2.3-0ubuntu1~cloud0
Candidate: 1:2013.2.3-0ubuntu1~cloud0
Version table:
   *** 1:2013.2.3-0ubuntu1~cloud0 0
  500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ 
precise-updates/havana/main amd64 Packages
  100 /var/lib/dpkg/status

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1315412/+subscriptions



[Yahoo-eng-team] [Bug 1405077] Re: VLAN configuration changes made is not updated until neutron server is restarted

2015-02-08 Thread Romil Gupta
You may need to consider many other configuration parameters like:
[ml2_type_gre]
tunnel_id_ranges = 1:1000

Also, what if I need to add a type driver or a mechanism driver in
ml2_conf.ini dynamically?

[ml2]
tenant_network_types = vxlan
type_drivers = local,flat,vlan,gre,vxlan
mechanism_drivers = openvswitch,l2population

I feel you need to propose a neutron-spec for this and ask the community
what needs to be considered for dynamic configuration.

It shouldn't be considered a bug.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1405077

Title:
  VLAN configuration changes made is not updated until neutron server is
  restarted

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I changed the network_vlan_ranges configuration in the configuration
  file, and I want the change to take effect without restarting the
  neutron server. It may not be a bug, but restarting the networking
  service itself could cause some critical processes to stop temporarily.

  As some configurations are subject to frequent change, automatic
  reloading of configuration without restarting the whole service may
  be a feasible solution.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1405077/+subscriptions



[Yahoo-eng-team] [Bug 1386054] [NEW] error message should use gettextutils

2014-10-27 Thread Romil Gupta
Public bug reported:

The existing LOG.error(_()) messages should be translated to
LOG.error(_LE()).

And every file which contains error messages should have the following
import:

from neutron.openstack.common.gettextutils import _LE

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1386054

Title:
  error message should use gettextutils

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The existing LOG.error(_()) messages should be translated to
  LOG.error(_LE()).

  And every file which contains error messages should have the following
  import:

  from neutron.openstack.common.gettextutils import _LE

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1386054/+subscriptions



[Yahoo-eng-team] [Bug 1386055] [NEW] critical message should use gettextutils

2014-10-27 Thread Romil Gupta
Public bug reported:

The existing LOG.critical(_()) messages should be translated to
LOG.critical(_LC()).

And every file which contains critical messages should have the following
import:

from neutron.openstack.common.gettextutils import _LC

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1386055

Title:
  critical message should use gettextutils

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The existing LOG.critical(_()) messages should be translated to
  LOG.critical(_LC()).

  And every file which contains critical messages should have the
  following import:

  from neutron.openstack.common.gettextutils import _LC

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1386055/+subscriptions



[Yahoo-eng-team] [Bug 1386059] [NEW] All LOG message should use gettextutils _LI, _LW, _LE, _LC

2014-10-27 Thread Romil Gupta
Public bug reported:

1. All existing LOG.info(_()) messages should be translated to
LOG.info(_LI()).

2. All existing LOG.warning(_()) messages should be translated to
LOG.warning(_LW()).

3. All existing LOG.error(_()) messages should be translated to
LOG.error(_LE()).

4. All existing LOG.critical(_()) messages should be translated to
LOG.critical(_LC()).

And every file which contains LOG messages should have the following
import:

from neutron.openstack.common.gettextutils import _LI, _LW, _LE, _LC

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

** Description changed:

  1. All  existing LOG.info(_() messages should be translated to
  LOG.info(_LI().
  
  2. All existing LOG.warning(_() messages should be translated to
  LOG.warning(_LW().
  
  3. All existing LOG.error(_() messages should be translated to
  LOG.error(_LE().
  
- 4.All existing LOG.critical(_() messages should be translated to
+ 4. All existing LOG.critical(_() messages should be translated to
  LOG.critical(_LC().
  
  And every file which contains LOG messages should have a following file
  imported.
  
  from neutron.openstack.common.gettextutils import _LI, _LW, _LE, _LC

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1386059

Title:
  All LOG message should use gettextutils _LI, _LW, _LE, _LC

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  1. All existing LOG.info(_()) messages should be translated to
  LOG.info(_LI()).

  2. All existing LOG.warning(_()) messages should be translated to
  LOG.warning(_LW()).

  3. All existing LOG.error(_()) messages should be translated to
  LOG.error(_LE()).

  4. All existing LOG.critical(_()) messages should be translated to
  LOG.critical(_LC()).

  And every file which contains LOG messages should have the following
  import:

  from neutron.openstack.common.gettextutils import _LI, _LW, _LE, _LC

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1386059/+subscriptions



[Yahoo-eng-team] [Bug 1384151] [NEW] warning message should use gettextutils

2014-10-22 Thread Romil Gupta
Public bug reported:

The existing LOG.warning(_()) messages should be translated to
LOG.warning(_LW()).

And every file which contains warning messages should have the following
import:

from neutron.openstack.common.gettextutils import _LW

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384151

Title:
  warning message should use gettextutils

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The existing LOG.warning(_()) messages should be translated to
  LOG.warning(_LW()).

  And every file which contains warning messages should have the
  following import:

  from neutron.openstack.common.gettextutils import _LW

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384151/+subscriptions



[Yahoo-eng-team] [Bug 1382152] [NEW] info message should use gettextutils

2014-10-16 Thread Romil Gupta
Public bug reported:


The existing LOG.info(_()) messages should be translated to
LOG.info(_LI()).

And every file which contains info messages should have the following
import:

from neutron.openstack.common.gettextutils import _LI

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382152

Title:
  info message should use gettextutils

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  The existing LOG.info(_()) messages should be translated to
  LOG.info(_LI()).

  And every file which contains info messages should have the following
  import:

  from neutron.openstack.common.gettextutils import _LI

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1382152/+subscriptions



[Yahoo-eng-team] [Bug 1381071] [NEW] Unit test not available for TunnelRpcCallbackMixin

2014-10-14 Thread Romil Gupta
Public bug reported:

Currently, there is no unit test available for the
TunnelRpcCallbackMixin.tunnel_sync() method. It needs to be
addressed.
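A sketch of the kind of unit test being asked for. Assumption flags: the
TunnelRpcCallbackMixin and _add_endpoint below are reduced stand-ins so the
example is self-contained, not the real Neutron classes or signatures:

```python
# Illustrative unit test for a simplified tunnel_sync(): the db-facing
# helper is mocked out so the test exercises only the RPC callback logic.
import unittest
from unittest import mock

class TunnelRpcCallbackMixin(object):
    """Reduced stand-in: records the tunnel endpoint reported by an agent."""
    def tunnel_sync(self, rpc_context, tunnel_ip):
        endpoint = self._add_endpoint(tunnel_ip)
        return {"tunnels": [endpoint]}

class TunnelSyncTestCase(unittest.TestCase):
    def test_tunnel_sync_adds_endpoint(self):
        mixin = TunnelRpcCallbackMixin()
        # Mock the db-facing helper so this stays a pure unit test.
        mixin._add_endpoint = mock.Mock(
            return_value={"ip_address": "10.0.0.1"})
        result = mixin.tunnel_sync(mock.sentinel.ctx, tunnel_ip="10.0.0.1")
        mixin._add_endpoint.assert_called_once_with("10.0.0.1")
        self.assertEqual([{"ip_address": "10.0.0.1"}], result["tunnels"])

# Run the test programmatically (instead of via a test runner CLI):
suite = unittest.TestLoader().loadTestsFromTestCase(TunnelSyncTestCase)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```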

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

** Description changed:

  Currently , there is no unit test available for
  TunnelRpcCallbackMixin-- tunnel_sync () method. It needs to be
- addressed..
+ addressed.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1381071

Title:
  Unit test not available for TunnelRpcCallbackMixin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, there is no unit test available for the
  TunnelRpcCallbackMixin.tunnel_sync() method. It needs to be
  addressed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1381071/+subscriptions



[Yahoo-eng-team] [Bug 1373359] [NEW] Vxlan UDP port is not updated in db

2014-09-24 Thread Romil Gupta
Public bug reported:

The value for the VXLAN UDP port should be changed in the
ml2_vxlan_endpoints table according to the value configured in
ml2_conf.ini on the L2 agent side.

   VXLAN_UDP_PORT = 4789

   def add_endpoint(self, ip, udp_port=VXLAN_UDP_PORT):
       LOG.debug(_("add_vxlan_endpoint() called for ip %s"), ip)
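A hedged sketch of the requested behavior: persist whatever UDP port the
agent reports instead of always writing the 4789 default. sqlite3 stands
in here for Neutron's real ml2_vxlan_endpoints table; only the names
mirror the snippet above:

```python
# Illustrative only: store the agent-configured VXLAN UDP port in the
# endpoints table rather than silently keeping the module default.
import sqlite3

VXLAN_UDP_PORT = 4789  # IANA default, used only when the agent reports none

def add_endpoint(conn, ip, udp_port=VXLAN_UDP_PORT):
    # Persist the port the L2 agent actually reported for this endpoint.
    conn.execute(
        "INSERT OR REPLACE INTO ml2_vxlan_endpoints (ip_address, udp_port) "
        "VALUES (?, ?)", (ip, udp_port))
    conn.commit()
    return conn.execute(
        "SELECT ip_address, udp_port FROM ml2_vxlan_endpoints "
        "WHERE ip_address = ?", (ip,)).fetchone()

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE ml2_vxlan_endpoints (ip_address TEXT PRIMARY KEY, "
    "udp_port INTEGER)")
print(add_endpoint(conn, "10.0.0.2", udp_port=8472))  # ('10.0.0.2', 8472)
```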

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373359

Title:
  Vxlan UDP port is not updated in db

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The value for the VXLAN UDP port should be changed in the
  ml2_vxlan_endpoints table according to the value configured in
  ml2_conf.ini on the L2 agent side.

  VXLAN_UDP_PORT = 4789

  def add_endpoint(self, ip, udp_port=VXLAN_UDP_PORT):
      LOG.debug(_("add_vxlan_endpoint() called for ip %s"), ip)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373359/+subscriptions



[Yahoo-eng-team] [Bug 1324436] [NEW] Changing the ml2 network type result in internal server error while performing delete/update operation on the pre existing resources

2014-05-29 Thread Romil Gupta
Public bug reported:

DESCRIPTION: Changing the ml2 network type results in an internal server
error while performing delete/update operations on the pre-existing
resources.

Steps to Reproduce:

1. Configure the ml2 plug-in with network type vxlan.
 /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,linuxbridge,l2population

2. Create a network:
neutron net-create Net1

+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| 4ea83d79-95f4-4a97-bb2e-b8599fa27723 |
| name  | Net1 |
| provider:network_type | vxlan|
| provider:physical_network |  |
| provider:segmentation_id  | 500  |
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| tenant_id | e261d031311b484a9ddb177291fab164 |
+---+--+

2. Update the ml2 plugin network type to vlan in
/etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch,linuxbridge,l2population

3. Create a network again:
neutron net-create n1
Created a new network:
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| d8593185-9e0f-435f-a96e-3d6deb13c5e4 |
| name  | n1   |
| provider:network_type | vlan |
| provider:physical_network | physnet1 |
| provider:segmentation_id  | 3541 |
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| tenant_id | e261d031311b484a9ddb177291fab164 |
+---+--+

4. List the networks; both networks are listed.

5. Try to delete the network of type vxlan created earlier:
neutron net-delete Net1

Request Failed: internal server error while processing your request.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ml2

** Description changed:

  DESCRIPTION: Changing the ml2 network type result in internal server
  error while performing delete/update operation on the pre existing
  resources
  
  Steps to Reproduce:
  
  1.Configure a ml2 plug-in with network type as vxlan.
-  /etc/neutron/plugins/ml2/ml2_conf.ini
+  /etc/neutron/plugins/ml2/ml2_conf.ini
  [ml2]
  type_drivers = vxlan
  tenant_network_types = vxlan
  mechanism_drivers = openvswitch,linuxbridge,l2population
  
  2.Create a network .
  neutron net-create Net1
  
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | id| 4ea83d79-95f4-4a97-bb2e-b8599fa27723 |
  | name  | Net1 |
  | provider:network_type | vxlan|
  | provider:physical_network |  |
  | provider:segmentation_id  | 500  |
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   |  |
  | tenant_id | e261d031311b484a9ddb177291fab164 |
  +---+--+
- 2. Update the ml2 plugin network type as 
vlan./etc/neutron/plugins/ml2/ml2_conf.ini
+ 
+ 2. Update the ml2 plugin network type as
+ vlan./etc/neutron/plugins/ml2/ml2_conf.ini
  
  [ml2]
  type_drivers = vlan
  tenant_network_types = vlan
  mechanism_drivers = openvswitch,linuxbridge,l2population
+ 
  3.Create a network again :
  neutron net-create n1
  Created a new network: