[Yahoo-eng-team] [Bug 1349895] [NEW] Radware LBaaS driver creates extra Neutron Ports

2014-07-29 Thread Avishay Balderman
Public bug reported:

The method '_create_port_for_pip' should first check whether a suitable Neutron Port already exists and reuse it.
Only if no such Port is found should it create a new one.

See:
https://github.com/openstack/neutron/blob/master/neutron/services/loadbalancer/drivers/radware/driver.py#L616
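For illustration, the check-then-create flow could look like the sketch below. FakePlugin and get_or_create_pip_port are hypothetical stand-ins for the real Neutron plugin API and the driver helper, not actual Neutron code.

```python
class FakePlugin(object):
    """Tiny in-memory stand-in for the Neutron plugin's port API."""

    def __init__(self):
        self.ports = []

    def get_ports(self, context, filters):
        names = filters.get('name', [])
        return [p for p in self.ports if p['name'] in names]

    def create_port(self, context, port):
        self.ports.append(port)
        return port


def get_or_create_pip_port(plugin, context, subnet_id, name):
    """Reuse an existing proxy-IP port if one matches, else create it."""
    existing = plugin.get_ports(context, filters={'name': [name]})
    if existing:
        return existing[0]  # reuse instead of leaking an extra port
    return plugin.create_port(context,
                              {'name': name,
                               'fixed_ips': [{'subnet_id': subnet_id}]})
```

Calling the helper twice with the same name should yield the same port rather than a second one.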

** Affects: neutron
 Importance: Undecided
 Assignee: Avishay Balderman (avishayb)
 Status: New


** Tags: lbaas radware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1349895

Title:
  Radware LBaaS driver creates extra Neutron Ports

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The method '_create_port_for_pip' should first check whether a suitable Neutron Port already exists and reuse it.
  Only if no such Port is found should it create a new one.

  See:
  https://github.com/openstack/neutron/blob/master/neutron/services/loadbalancer/drivers/radware/driver.py#L616

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1349895/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1327776] [NEW] Logging performance improvement

2014-06-08 Thread Avishay Balderman
Public bug reported:

We have a log decorator that is used in different modules:
https://github.com/openstack/neutron/blob/master/neutron/common/log.py

We can check whether debug logging is enabled before building and writing
the log message.

def log(method):
    """Decorator helping to log method calls."""
    def wrapper(*args, **kwargs):
        if LOG.isEnabledFor(logging.DEBUG):
            instance = args[0]
            data = {'class_name': (instance.__class__.__module__ + '.'
                                   + instance.__class__.__name__),
                    'method_name': method.__name__,
                    'args': args[1:], 'kwargs': kwargs}
            LOG.debug(_('%(class_name)s method %(method_name)s'
                        ' called with arguments %(args)s %(kwargs)s'), data)
        return method(*args, **kwargs)
    return wrapper


See: https://docs.python.org/2/library/logging.html#logging.Logger.isEnabledFor
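A minimal self-contained demonstration of the saving (the names here are illustrative, not Neutron code): with DEBUG disabled, the guard skips the expensive argument formatting entirely.

```python
import logging

# DEBUG is disabled, so the guard below should short-circuit.
logging.basicConfig(level=logging.INFO)
LOG = logging.getLogger("demo")

calls = {"formatted": 0}


def expensive_dump():
    # Stands in for the costly class/method/args introspection in the
    # decorator above.
    calls["formatted"] += 1
    return "huge argument dump"


def log_call():
    if LOG.isEnabledFor(logging.DEBUG):
        LOG.debug("called with %s", expensive_dump())


log_call()
# With level=INFO, expensive_dump() is never invoked.
```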

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1327776

Title:
  Logging performance improvement

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We have a log decorator that is used in different modules:
  https://github.com/openstack/neutron/blob/master/neutron/common/log.py

  We can check whether debug logging is enabled before building and
  writing the log message.

  def log(method):
      """Decorator helping to log method calls."""
      def wrapper(*args, **kwargs):
          if LOG.isEnabledFor(logging.DEBUG):
              instance = args[0]
              data = {'class_name': (instance.__class__.__module__ + '.'
                                     + instance.__class__.__name__),
                      'method_name': method.__name__,
                      'args': args[1:], 'kwargs': kwargs}
              LOG.debug(_('%(class_name)s method %(method_name)s'
                          ' called with arguments %(args)s %(kwargs)s'), data)
          return method(*args, **kwargs)
      return wrapper

  
  See:
  https://docs.python.org/2/library/logging.html#logging.Logger.isEnabledFor

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1327776/+subscriptions



[Yahoo-eng-team] [Bug 1324131] [NEW] Radware LBaaS driver should support HA backend

2014-05-28 Thread Avishay Balderman
Public bug reported:

The Radware LBaaS driver should be able to work with a backend configured in HA mode.
When one node is unreachable, the driver should call the other node in the HA pair and check whether it is active.
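A rough sketch of the intended failover, with plain callables standing in for HTTP calls to the two vDirect nodes (call_with_failover is a hypothetical helper, not the driver's API):

```python
def call_with_failover(nodes, request):
    """Try each node of the HA pair in order; use the first that answers."""
    last_err = None
    for node in nodes:
        try:
            return node(request)
        except ConnectionError as err:
            last_err = err          # remember the failure, try the next node
    raise last_err                  # every node in the pair failed
```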

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas radware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1324131

Title:
  Radware LBaaS driver should support HA backend

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Radware LBaaS driver should be able to work with a backend configured
  in HA mode.
  When one node is unreachable, the driver should call the other node in the
  HA pair and check whether it is active.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1324131/+subscriptions



[Yahoo-eng-team] [Bug 1317082] [NEW] LBaaS. When a Vip is created enable subnet selection

2014-05-07 Thread Avishay Balderman
Public bug reported:

When a Vip is created, the user should be able to specify a Subnet for the Vip that is different from the Pool's subnet.
The current implementation always uses the Pool's Subnet, and the user cannot specify a different subnet for the Vip.
Note that the LBaaS API for creating a VIP does support specifying a different Vip subnet.
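For reference, the LBaaS v1 create-VIP request carries its own subnet_id, separate from the pool's; the body below is illustrative and the UUID strings are placeholders.

```python
# Illustrative LBaaS v1 create-VIP body: the VIP's subnet_id is independent
# of the pool's subnet. The UUID strings are placeholders, not real IDs.
vip_request = {
    "vip": {
        "name": "web-vip",
        "protocol": "HTTP",
        "protocol_port": 80,
        "pool_id": "POOL-UUID",          # pool lives on its own subnet
        "subnet_id": "VIP-SUBNET-UUID",  # VIP may use a different subnet
    }
}
```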

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1317082

Title:
  LBaaS. When a Vip is created enable subnet selection

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a Vip is created, the user should be able to specify a Subnet for the
  Vip that is different from the Pool's subnet.
  The current implementation always uses the Pool's Subnet, and the user
  cannot specify a different subnet for the Vip.
  Note that the LBaaS API for creating a VIP does support specifying a
  different Vip subnet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1317082/+subscriptions



[Yahoo-eng-team] [Bug 1245892] Re: Radware LBaaS driver: NeutronException usage is faulty

2014-04-03 Thread Avishay Balderman
** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1245892

Title:
  Radware LBaaS driver: NeutronException usage is faulty

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Radware's LBaaS driver instantiates NeutronException incorrectly.

  Example:
  raise n_exc.NeutronException(
      _('params must contain __ids__ field!'))

  NeutronException expects kwargs. Passing a plain string causes an error.

  There are 4 places in services/loadbalancer/drivers/radware/driver.py
  module that should be fixed.
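The correct pattern, sketched with a minimal stand-in class (FakeNeutronException mimics neutron.common.exceptions.NeutronException, which interpolates a 'message' template with keyword arguments):

```python
class FakeNeutronException(Exception):
    """Minimal stand-in for neutron.common.exceptions.NeutronException."""
    message = "An unknown exception occurred."

    def __init__(self, **kwargs):
        # The real class formats its 'message' template with the kwargs;
        # this is why passing a bare string positionally fails.
        super(FakeNeutronException, self).__init__(self.message % kwargs)


class MissingIdsError(FakeNeutronException):
    message = "params must contain %(field)s field!"


try:
    raise MissingIdsError(field='__ids__')
except MissingIdsError as err:
    caught = str(err)
```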

  Another problem is the context usage:
  the Operations Handling thread uses the same context, persistent from
  start-up time, which is wrong.
  Each operation should use its own context to complete the operation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1245892/+subscriptions



[Yahoo-eng-team] [Bug 1245888] Re: Removing health monitor associated with a pool does not reflect the change to Radware vDirect system

2014-04-03 Thread Avishay Balderman
The function delete_health_monitor(...) no longer exists in plugin.py -- invalid.

** Changed in: neutron
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1245888

Title:
  Removing health monitor associated with a pool does not reflect the
  change to Radware vDirect system

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Radware is the LBaaS provider.
  A health monitor is associated with a pool.
  Remove the health monitor.
  The removal succeeds, but the change is not reflected to the Radware
  vDirect system.

  I know that removing an HM that is associated with a pool might be
  forbidden via Horizon, but technically it is possible.
  The HM should be removed, and all pools associated with it should be
  disconnected.

  The bug is in the lbaas plugin:
  services/loadbalancer/plugin.py
  Function delete_health_monitor()
  It should change the HM status to PENDING_DELETE before deleting the
  pool-hm association.

  Function after fix:

  def delete_health_monitor(self, context, id):
      with context.session.begin(subtransactions=True):
          hm = self.get_health_monitor(context, id)
          qry = context.session.query(
              ldb.PoolMonitorAssociation
          ).filter_by(monitor_id=id).join(ldb.Pool)
          for assoc in qry:
              driver = self._get_driver_for_pool(context, assoc['pool_id'])
              self.update_pool_health_monitor(context, id, assoc['pool_id'],
                                              constants.PENDING_DELETE)
              driver.delete_pool_health_monitor(context,
                                                hm,
                                                assoc['pool_id'])
          super(LoadBalancerPlugin, self).delete_health_monitor(context, id)

  I know that the health monitor design might be reviewed and changed
  during Icehouse.
  If so, this issue might not be relevant.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1245888/+subscriptions



[Yahoo-eng-team] [Bug 1296808] [NEW] Devstack. Fail to boot an instance if more than 1 network is defined

2014-03-24 Thread Avishay Balderman
Public bug reported:

Using Horizon I try to launch an instance with 2 networks defined.

The operation fails with the following error:

ERROR nova.scheduler.filter_scheduler [req-cae61024-6723-4218-bd5e-71b42d181cea admin demo]
[instance: 54c6f9ba-57e5-4680-bb6b-72eb2da484db] Error from last host: devstack-vmware1
(node devstack-vmware1): Traceback (most recent call last):
  File "/opt/stack/nova/nova/compute/manager.py", line 1304, in _build_instance
    set_access_ip=set_access_ip)
  File "/opt/stack/nova/nova/compute/manager.py", line 394, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 1716, in _spawn
    LOG.exception(_('Instance failed to spawn'), instance=instance)
  File "/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/compute/manager.py", line 1713, in _spawn
    block_device_info)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2241, in spawn
    block_device_info)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3628, in _create_domain_and_network
    network_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/opt/stack/nova/nova/compute/manager.py", line 556, in wait_for_instance_event
    actual_event = event.wait()
AttributeError: 'NoneType' object has no attribute 'wait'

Looks like there is a None event in the events map.

When I launch an instance with only 1 network defined, I face no issues.
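If that is the case, a defensive fix could fail fast with a clear error instead of crashing on None.wait(). The sketch below is illustrative; wait_for_event is not the actual nova function.

```python
def wait_for_event(events, key):
    """Look up an instance event and wait on it, failing loudly if absent.

    'events' stands in for nova's per-instance event map; a missing or
    None entry raises a descriptive error rather than AttributeError.
    """
    event = events.get(key)
    if event is None:
        raise LookupError("no event registered for %r" % (key,))
    return event.wait()
```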

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296808

Title:
  Devstack. Fail to boot an instance if more than 1 network is defined

Status in OpenStack Compute (Nova):
  New

Bug description:
  Using Horizon I try to launch an instance with 2 networks defined.

  The operation fails with the following error:

  ERROR nova.scheduler.filter_scheduler [req-cae61024-6723-4218-bd5e-71b42d181cea admin demo]
  [instance: 54c6f9ba-57e5-4680-bb6b-72eb2da484db] Error from last host: devstack-vmware1
  (node devstack-vmware1): Traceback (most recent call last):
    File "/opt/stack/nova/nova/compute/manager.py", line 1304, in _build_instance
      set_access_ip=set_access_ip)
    File "/opt/stack/nova/nova/compute/manager.py", line 394, in decorated_function
      return function(self, context, *args, **kwargs)
    File "/opt/stack/nova/nova/compute/manager.py", line 1716, in _spawn
      LOG.exception(_('Instance failed to spawn'), instance=instance)
    File "/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File "/opt/stack/nova/nova/compute/manager.py", line 1713, in _spawn
      block_device_info)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2241, in spawn
      block_device_info)
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3628, in _create_domain_and_network
      network_info)
    File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
      self.gen.next()
    File "/opt/stack/nova/nova/compute/manager.py", line 556, in wait_for_instance_event
      actual_event = event.wait()
  AttributeError: 'NoneType' object has no attribute 'wait'

  Looks like there is a None event in the events map.

  When I launch an instance with only 1 network defined, I face no issues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296808/+subscriptions



[Yahoo-eng-team] [Bug 1261424] [NEW] LBaaS DB: update_member catches DBDuplicateEntry

2013-12-16 Thread Avishay Balderman
Public bug reported:

When a Member is updated, the code catches a DBDuplicateEntry exception.
Since we are in an update operation, we assume the entity already exists
in the DB.

See:
https://github.com/openstack/neutron/blob/master/neutron/db/loadbalancer/loadbalancer_db.py#L695
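A sketch of the intended handling: on update, translate the duplicate-entry error into a meaningful member-conflict error. DBDuplicateEntry and MemberExists here are local stand-ins for the oslo.db and Neutron exception classes.

```python
class DBDuplicateEntry(Exception):
    """Local stand-in for oslo.db's DBDuplicateEntry."""


class MemberExists(Exception):
    """Local stand-in for a Neutron member-conflict exception."""


def update_member(save, member):
    try:
        return save(member)
    except DBDuplicateEntry:
        # On update the row itself already exists, so a duplicate-entry
        # error means the new address/port collides with another member:
        # report that conflict explicitly instead of a raw DB error.
        raise MemberExists(member["id"])
```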

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261424

Title:
  LBaaS DB: update_member catches DBDuplicateEntry

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a Member is updated, the code catches a DBDuplicateEntry exception.
  Since we are in an update operation, we assume the entity already exists
  in the DB.

  See:
  
https://github.com/openstack/neutron/blob/master/neutron/db/loadbalancer/loadbalancer_db.py#L695

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261424/+subscriptions



[Yahoo-eng-team] [Bug 1166382] Re: lbaas: update_member should validate the existence of the attached group

2013-04-11 Thread Avishay Balderman
** Changed in: quantum
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to quantum.
https://bugs.launchpad.net/bugs/1166382

Title:
  lbaas: update_member should validate the existence of the attached
  group

Status in OpenStack Quantum (virtual network service):
  Invalid

Bug description:
  update_member should validate the existence of the attached Pool.
  This is particularly relevant when reparenting the member to another Pool.
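A minimal sketch of the proposed validation, with a plain dict standing in for the DB lookup (update_member and PoolNotFound here are illustrative, not the plugin's actual code):

```python
class PoolNotFound(Exception):
    """Illustrative error for a missing target pool."""


def update_member(pools, member, new_pool_id):
    # Validate the target pool exists before reparenting the member to it.
    if new_pool_id not in pools:
        raise PoolNotFound(new_pool_id)
    member["pool_id"] = new_pool_id
    return member
```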

To manage notifications about this bug go to:
https://bugs.launchpad.net/quantum/+bug/1166382/+subscriptions



[Yahoo-eng-team] [Bug 1166348] Re: lbaas: create_vip - need to validate uniqueness by ip+subnet+port

2013-04-11 Thread Avishay Balderman
Creating a Port for the Vip avoids the situation described above.

** Changed in: quantum
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to quantum.
https://bugs.launchpad.net/bugs/1166348

Title:
  lbaas: create_vip - need to validate uniqueness by ip+subnet+port

Status in OpenStack Quantum (virtual network service):
  Invalid

Bug description:
  Creating a Vip with the same combination of ip + subnet + port as an
  existing Vip should fail.
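A sketch of the uniqueness check being requested, assuming a simple list of existing Vip dicts (vip_conflicts is a hypothetical helper, not Neutron code):

```python
def vip_conflicts(existing_vips, address, subnet_id, protocol_port):
    """True if a new Vip would duplicate address + subnet + port."""
    return any(v["address"] == address
               and v["subnet_id"] == subnet_id
               and v["protocol_port"] == protocol_port
               for v in existing_vips)
```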

To manage notifications about this bug go to:
https://bugs.launchpad.net/quantum/+bug/1166348/+subscriptions
