[Yahoo-eng-team] [Bug 1585520] Re: SR-IOV ports cannot reach the specific router or OVS ports they need on the same compute node

2016-07-24 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585520

Title:
  SR-IOV ports cannot reach the specific router or OVS ports they need
  on the same compute node

Status in neutron:
  Expired

Bug description:
  In some cases, we need SR-IOV ports to be able to reach some specific
  OVS ports on the same compute node, but not all of them; or to reach
  specific router internal ports without adding the MACs of all router
  ports to the FDB tables on that compute node.

  I have seen earlier code changes and bug reports about this problem.
  All of those solutions add the MACs of all OVS ports or router
  internal ports to the NIC FDB tables. I think that approach is flawed:
  it uses up and wastes the NIC's register resources. If there are many
  OVS ports on one compute node, the L2 agent adds every OVS port MAC to
  the FDB table, but because the number of physical NIC registers is
  limited, some MACs we actually want to reach cannot be added. In that
  case the SR-IOV port can reach ports we do not care about, yet cannot
  reach the specific ports we do.

  In my opinion, adding port extensions such as fdb_enable and
  sriov_enable would be better. CRUD operations are easy for tenants and
  do not require knowledge of the Linux fdb commands.
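  A toy model of the constraint described above (illustrative Python
  only, not neutron code; all names and the register count are made up):
  a NIC FDB with a fixed number of registers fills up when the agent
  programs every port MAC, so the MACs the SR-IOV port actually needs
  can be crowded out, while selective programming fits.

```python
# Illustrative model only -- not neutron code. It sketches why adding
# every OVS/router port MAC to a NIC FDB with limited registers can
# crowd out the MACs an SR-IOV port actually needs to reach.

class NicFdb:
    """A NIC embedded-switch FDB with a fixed number of registers."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []

    def add(self, mac):
        if len(self.entries) >= self.capacity:
            return False  # registers exhausted; entry is dropped
        self.entries.append(mac)
        return True

all_port_macs = ["fa:16:3e:00:00:%02x" % i for i in range(10)]
needed_macs = all_port_macs[-2:]  # the ports the SR-IOV port must reach

# Current behaviour: the L2 agent adds every MAC; the table fills first.
fdb = NicFdb(capacity=4)
for mac in all_port_macs:
    fdb.add(mac)
unreachable = [m for m in needed_macs if m not in fdb.entries]

# Proposed behaviour: add only the MACs flagged via a port extension
# (e.g. a hypothetical fdb_enable attribute), which always fit.
selective_fdb = NicFdb(capacity=4)
for mac in needed_macs:
    selective_fdb.add(mac)

print(len(unreachable))            # needed MACs crowded out
print(len(selective_fdb.entries))  # selective programming fits
```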

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585520/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583511] Re: when the loadbalancer's provisioning_status is ERROR and we create a listener, the loadbalancer's provisioning_status stays PENDING_UPDATE

2016-07-24 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583511

Title:
  when the loadbalancer's provisioning_status is ERROR and we create a
  listener, the loadbalancer's provisioning_status stays
  PENDING_UPDATE

Status in neutron:
  Expired

Bug description:
  The issue is in the kilo branch.

  When the loadbalancer's provisioning_status is ERROR and we create a
  listener, the loadbalancer's provisioning_status stays PENDING_UPDATE
  and the listener's provisioning_status stays PENDING_CREATE.

  The LBaaS agent log shows:
  2013-11-19 13:48:18.227 21025 ERROR oslo_messaging.rpc.dispatcher [req-ca5b3552-240a-461a-bd37-8467b465c096 ] Exception during message handling: An unknown exception occurred.
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 296, in delete_loadbalancer
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher     driver = self._get_driver(loadbalancer.id)
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 170, in _get_driver
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher     raise DeviceNotFoundOnAgent(loadbalancer_id=loadbalancer_id)
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher DeviceNotFoundOnAgent: An unknown exception occurred.
  2013-11-19 13:48:18.227 21025 TRACE oslo_messaging.rpc.dispatcher

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583511/+subscriptions



[Yahoo-eng-team] [Bug 1582585] Re: querying users from the LDAP server is very slow

2016-07-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/328820
Committed: https://git.openstack.org/cgit/openstack/keystone/commit/?id=53bb53a814324234aa4b798651a616e310396221
Submitter: Jenkins
Branch: master

commit 53bb53a814324234aa4b798651a616e310396221
Author: liuhongjiang 
Date:   Mon Jun 13 08:11:16 2016 +0800

Added cache for id mapping manager

When using an identity driver that does not provide a UUID, together
with the default SQL id mapping driver, listing users can take minutes
if there are many users. Adding a cache to the id mapping manager can
improve the performance.

After adding the cache, listing 12000 users through the keystone API
takes about 20 seconds instead of about 75 seconds.

Closes-Bug: #1582585
Change-Id: I72eeb88926d8babb09a61e99f6f594371987f393


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1582585

Title:
  querying users from the LDAP server is very slow

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  In our project, querying users from the LDAP server is very slow: with
  12,000 LDAP users, a query takes almost 45 seconds.

  The reason is that keystone generates a UUID for each LDAP user one by
  one and inserts it into the database, and later queries also go to the
  database instead of using a cache. So adding a cache improves the
  query speed.

  After adding @MEMOIZE to the following function:
  https://github.com/openstack/keystone/blob/master/keystone/identity/core.py#L580
  the first query still takes almost 50 seconds, but subsequent queries
  take only about 7 seconds.

  So this improvement is well worth making.
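  A minimal sketch of the kind of memoization described above, using
  functools.lru_cache (illustrative only; the function name, the
  (domain_id, local_id) key, and the simulated backend are not
  keystone's actual id-mapping API):

```python
# Minimal sketch of the caching idea described above -- illustrative
# only, not keystone's actual id-mapping code. An expensive lookup
# (simulated here) is memoized so repeated queries skip the backend.

import functools

backend_calls = 0

@functools.lru_cache(maxsize=None)
def get_public_id(local_entity):
    """Map a (domain_id, local_id) pair to a stable public id."""
    global backend_calls
    backend_calls += 1  # stands in for a SQL round-trip per user
    domain_id, local_id = local_entity
    return "uuid-for-%s-%s" % (domain_id, local_id)

users = [("default", "user%d" % i) for i in range(1000)]

first = [get_public_id(u) for u in users]   # hits the backend 1000 times
second = [get_public_id(u) for u in users]  # served entirely from cache

print(backend_calls)    # the cache avoided a second round of lookups
print(first == second)  # cached results are identical
```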

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1582585/+subscriptions



[Yahoo-eng-team] [Bug 1604838] Re: Failed to add sriov instance mac address when 2 physical ports are configured

2016-07-24 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/345247
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=7236d9cb9a2012ddd9bd0f7ca2683795f6998fd7
Submitter: Jenkins
Branch: master

commit 7236d9cb9a2012ddd9bd0f7ca2683795f6998fd7
Author: Edan David 
Date:   Thu Jul 21 04:03:25 2016 -0400

Validate device to mac instead of port id to mac

When updating the FDB table, validate whether a rule exists using
device-to-mac instead of port-id-to-mac.
When several devices are located on the same physnet, each device
needs to be updated separately; therefore the validation of existing
FDB rules should be device-to-mac.

Change-Id: I889cbb02a875403122d520357c38eae2af14ebbe
Closes-Bug: #1604838


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1604838

Title:
  Failed to add sriov instance mac address when 2 physical ports are
  configured

Status in neutron:
  Fix Released

Bug description:
  Ping fails between an sriov-vm and a pv-vm on the same physical
  server.

  From the configuration file /etc/neutron/plugins/ml2/ml2_conf.ini:

  [FDB]
  shared_physical_device_mappings = default:p2p1,default:p2p2

  The MAC address is added on the first port only.
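  A simplified sketch of the logic behind the fix (illustrative names
  and helper, not the actual neutron SR-IOV agent code): keying the
  "rule already exists" check by (device, mac) instead of (port_id, mac)
  lets the same instance MAC be programmed on every physical device
  mapped to the physnet.

```python
# Illustrative sketch of the fix -- not the actual neutron agent code.
# Keying the "already programmed?" check by (device, mac) instead of
# (port_id, mac) lets one instance MAC be added to every physical port
# mapped to the physnet (e.g. p2p1 and p2p2).

def program_fdb(devices, port_id, mac, existing, key_by_device):
    """Return the list of devices on which the MAC gets programmed."""
    programmed = []
    for device in devices:
        key = (device, mac) if key_by_device else (port_id, mac)
        if key in existing:
            continue  # rule considered present; this device is skipped
        existing.add(key)
        programmed.append(device)
    return programmed

devices = ["p2p1", "p2p2"]

# Buggy behaviour: after the first device, the (port_id, mac) key
# already exists, so the second physical port is skipped.
buggy = program_fdb(devices, "port-1", "fa:16:3e:aa:bb:cc", set(), False)

# Fixed behaviour: each device is validated separately.
fixed = program_fdb(devices, "port-1", "fa:16:3e:aa:bb:cc", set(), True)

print(buggy)  # only the first port is programmed
print(fixed)  # both ports are programmed
```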

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1604838/+subscriptions



[Yahoo-eng-team] [Bug 1606064] [NEW] Default value for vol_size can not be configured

2016-07-24 Thread Alvaro Aleman
Public bug reported:

Currently it is not possible to set a default value for the instance
creation option vol_size.
The only way to achieve this is by manually altering the JavaScript:
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js#L188

This should be configurable via Launch Instance defaults:
http://docs.openstack.org/developer/horizon/topics/settings.html#launch-instance-defaults

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1606064

Title:
  Default value for vol_size can not be configured

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently it is not possible to set a default value for the instance
  creation option vol_size.
  The only way to achieve this is by manually altering the JavaScript:
  https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/static/dashboard/project/workflow/launch-instance/launch-instance-model.service.js#L188

  This should be configurable via Launch Instance defaults:
  http://docs.openstack.org/developer/horizon/topics/settings.html#launch-instance-defaults

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1606064/+subscriptions



[Yahoo-eng-team] [Bug 1606020] [NEW] UnhashableKeyWarning in logs

2016-07-24 Thread Rob Cresswell
Public bug reported:

UnhashableKeyWarning: The key ((({'client': ,
 'version': 2}, u'admin', 'b6240f0c5a36459cb14062b31293cf52', 
u'837aed8e414a4867951d63378cdb0b81', 
u'http://192.168.56.101:8776/v2/837aed8e414a4867951d63378cdb0b81', 
u'http://192.168.56.101/identity'),), ()) is not hashable and cannot be 
memoized.
WARNING:py.warnings:UnhashableKeyWarning: The key ((({'client': ,
 'version': 2}, u'admin', 'b6240f0c5a36459cb14062b31293cf52', 
u'837aed8e414a4867951d63378cdb0b81', 
u'http://192.168.56.101:8776/v2/837aed8e414a4867951d63378cdb0b81', 
u'http://192.168.56.101/identity'),), ()) is not hashable and cannot be 
memoized.


This might be related to https://review.openstack.org/#/c/314750/

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1606020

Title:
  UnhashableKeyWarning in logs

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  UnhashableKeyWarning: The key ((({'client': ,
 'version': 2}, u'admin', 'b6240f0c5a36459cb14062b31293cf52', 
u'837aed8e414a4867951d63378cdb0b81', 
u'http://192.168.56.101:8776/v2/837aed8e414a4867951d63378cdb0b81', 
u'http://192.168.56.101/identity'),), ()) is not hashable and cannot be 
memoized.
  WARNING:py.warnings:UnhashableKeyWarning: The key ((({'client': ,
 'version': 2}, u'admin', 'b6240f0c5a36459cb14062b31293cf52', 
u'837aed8e414a4867951d63378cdb0b81', 
u'http://192.168.56.101:8776/v2/837aed8e414a4867951d63378cdb0b81', 
u'http://192.168.56.101/identity'),), ()) is not hashable and cannot be 
memoized.

  
  This might be related to https://review.openstack.org/#/c/314750/
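  The warning arises because a dict appears inside the memoization key,
  and dicts are not hashable. A minimal reproduction of the failure mode
  (generic Python, not Horizon's actual memoization code; the `settings`
  dict is made up):

```python
# Minimal reproduction of the failure mode behind the warning -- generic
# Python, not Horizon's actual memoization code. A dict inside the cache
# key makes the whole key unhashable, so the call cannot be memoized;
# freezing the dict into a frozenset of items yields a hashable key.

settings = {"client": "cinderclient", "version": 2}

key = ((settings, "admin"), ())
try:
    hash(key)
    hashable = True
except TypeError:
    hashable = False  # the embedded dict poisons the tuple's hash

# One common workaround: freeze the dict before building the key.
frozen_key = ((frozenset(settings.items()), "admin"), ())

print(hashable)         # the original key cannot be hashed
print(hash(frozen_key))  # the frozen key works as a cache key
```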

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1606020/+subscriptions



[Yahoo-eng-team] [Bug 1605966] [NEW] L3 HA: VIP doesn't change if the qr or qg interface goes down

2016-07-24 Thread changzhi
Public bug reported:

In L3 HA, if the qr interface or the qg interface goes down, the VIP
does not change.

The current keepalived configuration file looks like:

vrrp_instance VR_2 {
state BACKUP
interface ha-c00c7b49-d5
virtual_router_id 2
priority 50
garp_master_delay 60
nopreempt
advert_int 2
track_interface {
ha-c00c7b49-d5
}
virtual_ipaddress {
169.254.0.2/24 dev ha-c00c7b49-d5
}
virtual_ipaddress_excluded {
2.2.2.1/24 dev qr-b312f788-9b
fe80::f816:3eff:feac:fa12/64 dev qr-b312f788-9b scope link
}
}

The tracked interfaces include only the "ha" interface, so the VIP will
not change if a "qr" or "qg" interface goes down.

To address this, we track the "qr" and "qg" interfaces as well, like
this:

vrrp_instance VR_2 {
state BACKUP
interface ha-c00c7b49-d5
virtual_router_id 2
priority 50
garp_master_delay 60
nopreempt
advert_int 2
track_interface {
qr-xxx
qg-xxx
ha-c00c7b49-d5
}
virtual_ipaddress {
169.254.0.2/24 dev ha-c00c7b49-d5
}
virtual_ipaddress_excluded {
2.2.2.1/24 dev qr-b312f788-9b
fe80::f816:3eff:feac:fa12/64 dev qr-b312f788-9b scope link
}
}
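A sketch of how a keepalived config builder could emit the extended
track_interface stanza (illustrative helper, not neutron's actual
keepalived module; the qg device name is made up):

```python
# Illustrative helper -- not neutron's actual keepalived config code.
# It renders a track_interface block that lists the qr/qg devices
# alongside the ha device, as proposed above.

def render_track_interface(ha_device, extra_devices):
    """Render a keepalived track_interface stanza as a string."""
    lines = ["track_interface {"]
    for dev in list(extra_devices) + [ha_device]:
        lines.append("    %s" % dev)
    lines.append("}")
    return "\n".join(lines)

block = render_track_interface("ha-c00c7b49-d5",
                               ["qr-b312f788-9b", "qg-a1b2c3d4-e5"])
print(block)
```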

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605966

Title:
  L3 HA: VIP doesn't change if the qr or qg interface goes down

Status in neutron:
  New

Bug description:
  In L3 HA, if the qr interface or the qg interface goes down, the VIP
  does not change.

  The current keepalived configuration file looks like:

  vrrp_instance VR_2 {
  state BACKUP
  interface ha-c00c7b49-d5
  virtual_router_id 2
  priority 50
  garp_master_delay 60
  nopreempt
  advert_int 2
  track_interface {
  ha-c00c7b49-d5
  }
  virtual_ipaddress {
  169.254.0.2/24 dev ha-c00c7b49-d5
  }
  virtual_ipaddress_excluded {
  2.2.2.1/24 dev qr-b312f788-9b
  fe80::f816:3eff:feac:fa12/64 dev qr-b312f788-9b scope link
  }
  }

  The tracked interfaces include only the "ha" interface, so the VIP
  will not change if a "qr" or "qg" interface goes down.

  To address this, we track the "qr" and "qg" interfaces as well, like
  this:

  vrrp_instance VR_2 {
  state BACKUP
  interface ha-c00c7b49-d5
  virtual_router_id 2
  priority 50
  garp_master_delay 60
  nopreempt
  advert_int 2
  track_interface {
  qr-xxx
  qg-xxx
  ha-c00c7b49-d5
  }
  virtual_ipaddress {
  169.254.0.2/24 dev ha-c00c7b49-d5
  }
  virtual_ipaddress_excluded {
  2.2.2.1/24 dev qr-b312f788-9b
  fe80::f816:3eff:feac:fa12/64 dev qr-b312f788-9b scope link
  }
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605966/+subscriptions
