[Yahoo-eng-team] [Bug 1487823] [NEW] DVR+LBaasV2- Floating IP down

2015-08-22 Thread Alex Syafeyev
Public bug reported:

Description of problem:
AIO+Compute setup. LbaasV2+DVR
The floating IP does not come up after associating it with the LB VIP.

When working with DVR, the external (gateway) interface is configured in
the snat namespace (in a non-DVR setup it is in the qrouter namespace):

"
[root@puma07 ~(keystone_admin)]# ip netns
qlbaas-0e804f06-c5f7-42ce-a2a2-dceab1021f32
snat-c7f6851d-981b-4b07-9584-0803d0240b7f  <---
qrouter-c7f6851d-981b-4b07-9584-0803d0240b7f
qdhcp-e2a6b8e4-40de-47e5-9dcd-e66121c5562b
"
When we associate a floating IP with the LB VIP, Neutron looks for the
external interface in the qrouter namespace.

Associating a floating IP with a VM (without LB) works fine.
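
For reference, here is a minimal diagnostic sketch (my own illustration, not
part of the report) that uses the same ip netns tooling shown above to find
which namespace actually holds the external qg- interface for a router; the
helper name is hypothetical:

# Hypothetical diagnostic helper: returns the namespace that holds the
# external (qg-) interface for a given router ID, using the "ip netns"
# CLI shown above. Illustrative only, not Neutron code.
import subprocess

def find_external_interface_ns(router_id):
    namespaces = subprocess.check_output(['ip', 'netns']).decode().split()
    for ns in namespaces:
        if router_id not in ns:
            continue
        links = subprocess.check_output(
            ['ip', 'netns', 'exec', ns, 'ip', '-o', 'link']).decode()
        if 'qg-' in links:  # gateway ports are named qg-<port-id-prefix>
            return ns
    return None

# On the DVR node above this returns the snat-... namespace, while the FIP
# plumbing expects the external interface in the qrouter-... namespace.
print(find_external_interface_ns('c7f6851d-981b-4b07-9584-0803d0240b7f'))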


Version-Release number of selected component (if applicable):
python-neutron-fwaas-2015.1.0-3.el7ost.noarch
openstack-neutron-common-2015.1.0-12.el7ost.noarch
python-neutronclient-2.4.0-1.el7ost.noarch
openstack-neutron-2015.1.0-12.el7ost.noarch
python-neutron-lbaas-2015.1.0-5.el7ost.noarch
openstack-neutron-ml2-2015.1.0-12.el7ost.noarch
openstack-neutron-openvswitch-2015.1.0-12.el7ost.noarch
openstack-neutron-lbaas-2015.1.0-5.el7ost.noarch
python-neutron-2015.1.0-12.el7ost.noarch
openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch


How reproducible:


Steps to Reproduce:
1. Configure a DVR setup (AIO + compute).
2. Enable LBaaSv2 and create a load balancer.
3. Create a floating IP and assign it to the LB VIP port.

Actual results:
The floating IP is down because of the DVR configuration (the gateway
interface is in the snat namespace, not in the qrouter namespace).

Expected results:
Floating IP should be up

I think we should do one of the following:
1. When LBaaS is configured, the external interface/port should be in the
qrouter namespace even in a DVR environment, meaning no snat or fip
namespaces. If additional VMs are created and assigned other floating IPs,
then we should use the snat and fip namespaces.

or

2. We should use the LBaaS agent exactly as we use the L3 agent in a DVR
environment.


Logs:
2015-08-19 12:16:00.330 6391 INFO neutron.wsgi 
[req-beec6448-208f-4f71-88c8-c8ace78dddef ] 10.35.160.23 - - [19/Aug/2015 
12:16:00] "PUT /v2.0/floatingips/8087cafc-595a-486a-8e12-7016793cfa70.json 
HTTP/1.1" 200 584 0.484856
2015-08-19 12:16:00.833 6300 DEBUG neutron.db.l3_dvr_db 
[req-74e2674c-f5d2-40b8-9187-415daba4bb1b ] FIP Agent : 
00fe9d29-5c78-4d82-b018-669b2d3fe622  _process_floating_ips_dvr 
/usr/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py:415
2015-08-19 12:16:00.837 6300 DEBUG neutron.db.l3_dvr_db 
[req-d975570b-f66d-4825-a9dd-f019ed4fa729 ] FIP Agent : 
9eb0dae2-90e9-46b5-afab-d4f5c473e74b  _process_floating_ips_dvr 
/usr/lib/python2.7/site-packages/neutron/db/l3_dvr_db.py:415
2015-08-19 12:16:01.144 6315 DEBUG neutron.agent.l3.router_info [-] No 
Interface for floating IPs router: c7f6851d-981b-4b07-9584-0803d0240b7f 
process_floating_ip_addresses 
/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py:229
2015-08-19 12:16:01.144 6315 DEBUG neutron.agent.l3.agent [-] Sending floating 
ip statuses: {} update_fip_statuses 
/usr/lib/python2.7/site-packages/neutron/agent/l3/agent.py:340

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487823

Title:
  DVR+LBaasV2- Floating IP down

Status in neutron:
  New

Bug description:
  Description of problem:
  AIO+Compute setup. LbaasV2+DVR
  The floating IP does not come up after associating it with the LB VIP.

  When working with DVR, the external (gateway) interface is configured in
  the snat namespace (in a non-DVR setup it is in the qrouter namespace):

  "
  [root@puma07 ~(keystone_admin)]# ip netns
  qlbaas-0e804f06-c5f7-42ce-a2a2-dceab1021f32
  snat-c7f6851d-981b-4b07-9584-0803d0240b7f  <---
  qrouter-c7f6851d-981b-4b07-9584-0803d0240b7f
  qdhcp-e2a6b8e4-40de-47e5-9dcd-e66121c5562b
  "
  When we associate a floating IP with the LB VIP, Neutron looks for the
  external interface in the qrouter namespace.

  Associating a floating IP with a VM (without LB) works fine.

  
  Version-Release number of selected component (if applicable):
  python-neutron-fwaas-2015.1.0-3.el7ost.noarch
  openstack-neutron-common-2015.1.0-12.el7ost.noarch
  python-neutronclient-2.4.0-1.el7ost.noarch
  openstack-neutron-2015.1.0-12.el7ost.noarch
  python-neutron-lbaas-2015.1.0-5.el7ost.noarch
  openstack-neutron-ml2-2015.1.0-12.el7ost.noarch
  openstack-neutron-openvswitch-2015.1.0-12.el7ost.noarch
  openstack-neutron-lbaas-2015.1.0-5.el7ost.noarch
  python-neutron-2015.1.0-12.el7ost.noarch
  openstack-neutron-fwaas-2015.1.0-3.el7ost.noarch


  How reproducible:

  
  Steps to Reproduce:
  1. Configure a DVR setup (AIO + compute).
  2. Enable LBaaSv2 and create a load balancer.
  3. Create a floating IP and assign it to the LB VIP port.

  Actual results:
  The floating IP is down because of the DVR configuration (the gateway
  interface is in the snat namespace, not in the qrouter namespace).

  Expected results:
  Floating IP should be up

  I think we should do one of the following:
  1. Wh

[Yahoo-eng-team] [Bug 1483322] Re: python-memcached get_multi is much faster than get when retrieving multiple values

2015-08-22 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.cache
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1483322

Title:
  python-memcached get_multi is much faster than get when retrieving
  multiple values

Status in OpenStack Compute (nova):
  In Progress
Status in oslo.cache:
  New

Bug description:
  Nova uses memcache via python-memcached's get function.

  When multiple items are retrieved, it does so in a for .. in .. loop;
  in this case get_multi has better performance.

  In my case, here is the test result:

  get 2.3020670414
  get_multi 0.0353858470917
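
  As an illustration (my addition, not from the report), a minimal sketch of
  the two access patterns with python-memcached; the keys, values, and server
  address below are made up:

  # Compare per-key get() in a loop with one batched get_multi() call.
  # Keys/values/server address are illustrative only.
  import time
  import memcache

  mc = memcache.Client(['127.0.0.1:11211'])
  keys = ['item:%d' % i for i in range(1000)]
  for k in keys:
      mc.set(k, 'value')

  start = time.time()
  results_loop = {k: mc.get(k) for k in keys}   # one round trip per key
  print('get       %s' % (time.time() - start))

  start = time.time()
  results_multi = mc.get_multi(keys)            # one batched round trip
  print('get_multi %s' % (time.time() - start))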

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1483322/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1410706] Re: Nova-compute can't start after DB downgrade from juno to havana

2015-08-22 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1410706

Title:
  Nova-compute can't start after DB downgrade from juno to havana

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  We downgraded our OpenStack deployment from Juno to Havana, and the
  nova-compute process could not start because of a DB schema issue: the
  'compute_node_stats' table is recreated during the DB downgrade (see
  233_add_stats_in_compute_nodes.py), but the migration does not set the
  'id' column as the primary key, which causes nova-compute to fail to
  start.

  raise result RemoteError: Remote error: DBError (IntegrityError) null value 
in column "ID" violates not-null constraint DETAIL:  
  Failing row contains (2014-12-29 07:37:18.211288, null, null, 0, null, 
num_task_None, 1, 1).  'INSERT INTO compute_node_stats (created_at, updated_at, 
deleted_at, deleted, key, value, compute_node_id) VALUES (%(created_at)s, 
%(updated_at)s, %(deleted_at)s, %(deleted)s, %(key)s, %(value)s, 
%(compute_node_id)s) RETURNING compute_node_stats.id' {'deleted': 0, 
'created_at': datetime.datetime(2014, 12, 29, 7, 37, 18, 211288), 'updated_at': 
None, 'value': 1, 'compute_node_id': 1, 'key': u'num_task_None', 'deleted_at': 
None}
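
  For illustration (my addition, not the actual migration), a minimal
  SQLAlchemy sketch of recreating compute_node_stats with an explicit
  primary key on 'id'; the column list is abbreviated and the function is a
  stand-in for the downgrade step in 233_add_stats_in_compute_nodes.py:

  # Hedged sketch only: declare 'id' as an autoincrement primary key when
  # the table is recreated, so inserts do not violate the not-null constraint.
  from sqlalchemy import Column, DateTime, Integer, MetaData, String, Table

  def downgrade(migrate_engine):
      meta = MetaData(bind=migrate_engine)
      compute_node_stats = Table(
          'compute_node_stats', meta,
          Column('id', Integer, primary_key=True, autoincrement=True),
          Column('created_at', DateTime),
          Column('updated_at', DateTime),
          Column('deleted_at', DateTime),
          Column('deleted', Integer),
          Column('compute_node_id', Integer, nullable=False),
          Column('key', String(255)),
          Column('value', String(255)),
          mysql_engine='InnoDB',
      )
      compute_node_stats.create()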

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1410706/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456007] Re: The function _delete_mpath has no valid parameter

2015-08-22 Thread Davanum Srinivas (DIMS)
There's no _delete_mpath anymore since we moved to os-brick

** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456007

Title:
  The function _delete_mpath has no valid parameter

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  In nova/virt/libvirt/volume.py, the function _delete_mpath(self,
  iscsi_properties, multipath_device, ips_iqns) never uses the
  multipath_device parameter, so the parameter is not needed and should be
  removed.

  This problem also exists in the Kilo release.
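
  A minimal sketch of the suggested change (the bodies are placeholders, not
  the real implementation in nova/virt/libvirt/volume.py):

  # Before: 'multipath_device' is accepted but never referenced in the body.
  def _delete_mpath(self, iscsi_properties, multipath_device, ips_iqns):
      pass  # body omitted; placeholder only

  # After (suggested): drop the unused parameter and update all call sites.
  def _delete_mpath(self, iscsi_properties, ips_iqns):
      pass  # body omitted; placeholder only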

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1456007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487742] [NEW] Nova passing bad 'size' property value 'None' to Glance for image metadata

2015-08-22 Thread Mike Dorman
Public bug reported:

Glance does not accept 'None' as a valid value for the 'size' property
[1].  However, in certain situations Nova is sending a 'size' property
with a 'None' value.  This results in a 400 response from Glance to
Nova, and the following backtrace in Glance:

2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images Traceback (most recent 
call last):
2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images   File 
"/usr/lib/python2.7/site-packages/glance/api/v1/images.py", line 1144, in 
_deserialize
2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images 
result['image_meta'] = utils.get_image_meta_from_headers(request)
2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images   File 
"/usr/lib/python2.7/site-packages/glance/common/utils.py", line 322, in 
get_image_meta_from_headers
2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images extra_msg=extra)
2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images InvalidParameterValue: 
Invalid value 'None' for parameter 'size': Cannot convert image size 'None' to 
an integer.

I believe what's happening is that Nova tries to enforce certain required
properties when creating or updating an image, and in the process
reconciles those with the properties that Glance already has (through
the _translate_from_glance() [2] and _extract_attributes() [3] methods
in nova/image/glance.py).

Nova is enforcing the 'size' property being in place [4], but if Glance
does not already have a 'size' property on the image (like if the image
has been queued but not uploaded yet), the value gets set to 'None' on
the Nova side [5].  This gets sent to Glance in subsequent calls, and it
fails because 'None' cannot be converted to an integer (see backtrace
above.)


Steps to Reproduce:

Nova and Glance 2015.1.1

1.  Queue a new image in Glance
2.  Attempt to set a metadata attribute on that image (this will fail with 400 
error from Glance)
3.  Actually upload the image data sometime later


Potential Solution:

I've patched this locally to simply ensure that the 'size' property gets
set to 0 instead of 'None' on the Nova side.  I am not familiar enough
with all the internals here to know whether that's the "right" solution,
but I can confirm it's working for us and this bug is no longer
triggered.


[1] 
https://github.com/openstack/glance/blob/2015.1.1/glance/common/utils.py#L305-L319
[2] https://github.com/openstack/nova/blob/2015.1.1/nova/image/glance.py#L482
[3] https://github.com/openstack/nova/blob/2015.1.1/nova/image/glance.py#L533
[4] https://github.com/openstack/nova/blob/2015.1.1/nova/image/glance.py#L539
[5] https://github.com/openstack/nova/blob/2015.1.1/nova/image/glance.py#L571
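
A minimal sketch of that kind of local workaround (my illustration; the
variable names 'image' and 'output' are assumptions, not the exact code in
_extract_attributes() at [3]/[5]):

# Hedged sketch: coerce a missing/None image size to 0 when building the
# Nova-side image metadata, so it is never sent back to Glance as 'None'.
size = getattr(image, 'size', None)
output['size'] = size if size is not None else 0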

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487742

Title:
  Nova passing bad 'size' property value 'None' to Glance for image
  metadata

Status in OpenStack Compute (nova):
  New

Bug description:
  Glance does not accept 'None' as a valid value for the 'size' property
  [1].  However, in certain situations Nova is sending a 'size' property
  with a 'None' value.  This results in a 400 response from Glance to
  Nova, and the following backtrace in Glance:

  2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images Traceback (most 
recent call last):
  2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images   File 
"/usr/lib/python2.7/site-packages/glance/api/v1/images.py", line 1144, in 
_deserialize
  2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images 
result['image_meta'] = utils.get_image_meta_from_headers(request)
  2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images   File 
"/usr/lib/python2.7/site-packages/glance/common/utils.py", line 322, in 
get_image_meta_from_headers
  2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images extra_msg=extra)
  2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images 
InvalidParameterValue: Invalid value 'None' for parameter 'size': Cannot 
convert image size 'None' to an integer.

  I believe what's happening is that Nova tries to enforce certain
  required properties when creating or updating an image, and in the
  process reconciles those with the properties that Glance already has
  (through the _translate_from_glance() [2] and _extract_attributes() [3]
  methods in nova/image/glance.py).

  Nova is enforcing the 'size' property being in place [4], but if
  Glance does not already have a 'size' property on the image (like if
  the image has been queued but not uploaded yet), the value gets set to
  'None' on the Nova side [5].  This gets sent to Glance in subsequent
  calls, and it fails because 'None' cannot be converted to an integer
  (see backtrace above.)

  
  Steps to Reproduce:

  Nova and Glance 2015.1.1

  1.  Queue a new image in Glance
  2.  Attempt to set a metadata attribute on that image (this will fa

[Yahoo-eng-team] [Bug 1431215] Re: AttributeError: 'Instance' object has no attribute 'get_flavor'

2015-08-22 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431215

Title:
  AttributeError: 'Instance' object has no attribute 'get_flavor'

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  see "AttributeError: 'Instance' object has no attribute 'get_flavor'",
  when call API.update() of nova/compute/api.py

  2015-03-12 15:25:36.327 24722 TRACE nova.notifications [instance: 
50328180-5fa1-ac4d-633c-b278cce952a9] Traceback (most recent call last):
  2015-03-12 15:25:36.327 24722 TRACE nova.notifications [instance: 
50328180-5fa1-ac4d-633c-b278cce952a9]   File 
"/usr/lib/python2.7/site-packages/nova/notifications.py", line 187, in 
send_update_with_states
  2015-03-12 15:25:36.327 24722 TRACE nova.notifications [instance: 
50328180-5fa1-ac4d-633c-b278cce952a9] service=service, host=host)
  2015-03-12 15:25:36.327 24722 TRACE nova.notifications [instance: 
50328180-5fa1-ac4d-633c-b278cce952a9]   File 
"/usr/lib/python2.7/site-packages/nova/notifications.py", line 228, in 
_send_instance_update_notification
  2015-03-12 15:25:36.327 24722 TRACE nova.notifications [instance: 
50328180-5fa1-ac4d-633c-b278cce952a9] payload = info_from_instance(context, 
instance, None, None)
  2015-03-12 15:25:36.327 24722 TRACE nova.notifications [instance: 
50328180-5fa1-ac4d-633c-b278cce952a9]   File 
"/usr/lib/python2.7/site-packages/nova/notifications.py", line 371, in 
info_from_instance
  2015-03-12 15:25:36.327 24722 TRACE nova.notifications [instance: 
50328180-5fa1-ac4d-633c-b278cce952a9] instance_type = instance.get_flavor()
  2015-03-12 15:25:36.327 24722 TRACE nova.notifications [instance: 
50328180-5fa1-ac4d-633c-b278cce952a9] AttributeError: 'Instance' object has no 
attribute 'get_flavor'
  2015-03-12 15:25:36.327 24722 TRACE nova.notifications [instance: 
50328180-5fa1-ac4d-633c-b278cce952a9]
  2015-03-12 15:25:42.934 24722 ERROR nova.notifications [-] [instance: 
50334235-a76c-8658-4125-24f522b9c1d7] Failed to send state update notification

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1431215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1486745] Re: Horizon project list broken by pki/pkiz tokens

2015-08-22 Thread Matthias Runge
*** This bug is a duplicate of bug 1484499 ***
https://bugs.launchpad.net/bugs/1484499

** No longer affects: horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1486745

Title:
  Horizon project list broken by pki/pkiz tokens

Status in django-openstack-auth:
  Fix Committed

Bug description:
  When logging into Horizon with a Keystone that uses pki or pkiz tokens
  the project list in the pull down is empty. The root cause is the
  constructor for openstack_auth.user.Token is reusing the hasher object
  that already has content from hashing the token_id. This results in a
  403 error when trying to load the project list with the unscoped token
  because the hashed value does not match the id in Keystone.

  If the hasher is recreated before hashing the unscoped token, it will
  match the token id in Keystone and all is good.
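
  To illustrate the hasher-reuse problem (my addition; the token strings are
  made up and this is plain hashlib, not the django_openstack_auth code):

  # Reusing one hash object makes the second digest cover both inputs,
  # so the hashed unscoped token no longer matches the id Keystone knows.
  import hashlib

  scoped_token = 'scoped-token-body'
  unscoped_token = 'unscoped-token-body'

  hasher = hashlib.sha256()
  hasher.update(scoped_token.encode())
  scoped_hash = hasher.hexdigest()
  hasher.update(unscoped_token.encode())       # state still holds scoped_token
  wrong_unscoped_hash = hasher.hexdigest()

  # Correct: recreate the hasher before hashing the unscoped token.
  right_unscoped_hash = hashlib.sha256(unscoped_token.encode()).hexdigest()

  assert wrong_unscoped_hash != right_unscoped_hash  # source of the 403s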

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1486745/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487671] Re: ldap and ldappool packages are not mentioned in requirements.txt

2015-08-22 Thread Dolph Mathews
See my comment on a related bug:
https://bugs.launchpad.net/keystone/+bug/1487728/comments/2

** Changed in: keystone
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1487671

Title:
  ldap and ldappool packages are not mentioned in requirements.txt

Status in Keystone:
  Invalid

Bug description:
  The ldap backend driver depends on the ldap and ldappool packages and
  won't work unless the packages are explicitly installed.

  How to reproduce
  1) Create virtual env (mkvirtualenv keystone-ldap)
  2) Check out git repo (git clone https://github.com/openstack/keystone -b 
stable/juno) 
  3)  pip install -r requirements.txt
  4) Start the Python interpreter ([GCC 4.6.3] on linux2):
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import ldap
  Traceback (most recent call last):
File "", line 1, in 
  ImportError: No module named ldap
  >>>

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1487671/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487728] Re: ldap and ldappool modules are not listed in the requirements file

2015-08-22 Thread Dolph Mathews
LDAP dependencies are optional and are defined here:

  https://github.com/openstack/keystone/blob/master/setup.cfg#L25-L27

This takes advantage of setuptools extras:

  https://pythonhosted.org/setuptools/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies

Use the following pattern to install extra dependencies:

  $ pip install keystone[ldap]
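
For illustration (not Keystone's actual packaging files), a minimal
setuptools sketch of how such optional extras are declared; the project and
package names below are placeholders:

  # Hypothetical setup.py fragment: packages under the 'ldap' extra are only
  # installed when the extra is requested, e.g. `pip install pkg[ldap]`.
  from setuptools import setup

  setup(
      name='example-identity-service',
      version='0.1',
      extras_require={
          'ldap': ['python-ldap', 'ldappool'],
      },
  )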

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1487728

Title:
  ldap and ldappool modules are not listed in the requirements file

Status in Keystone:
  Invalid

Bug description:
  Configuring LDAP as the assignment backend driver causes some module
  import errors.  After reviewing the requirements.txt file, those
  dependencies are not listed there, although they are listed in the
  requirements repo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1487728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487731] [NEW] Improve error message display on admin create flavor

2015-08-22 Thread qiaomin032
Public bug reported:

When a flavor name or flavor ID that already exists is entered, the error
message is shown at the top of the workflow modal. It would be better if
the error message appeared beside the input box.

** Affects: horizon
 Importance: Undecided
 Assignee: qiaomin032 (chen-qiaomin)
 Status: In Progress

** Attachment added: "flavor_repeat.jpg"
   
https://bugs.launchpad.net/bugs/1487731/+attachment/4450823/+files/flavor_repeat.jpg

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1487731

Title:
  Improve error message display on admin create flavor

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When a flavor name or flavor ID that already exists is entered, the
  error message is shown at the top of the workflow modal. It would be
  better if the error message appeared beside the input box.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1487731/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487271] Re: success_url miss 'reverse_lazy' in admin volume type panel

2015-08-22 Thread qiaomin032
** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1487271

Title:
  success_url miss 'reverse_lazy' in admin volume type panel

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  In the admin volume type panel, several success_url values in the views
  are missing 'reverse_lazy', which results in the cancel action
  redirecting to the wrong URL.  For example, right-click the "Create
  Volume Type" button to open it in a new page; clicking the Cancel button
  there raises an error.
  See the attachment for more detail.
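
  For illustration (my addition; the view class and URL name below are
  hypothetical, not Horizon's actual ones), a minimal Django sketch of the
  difference:

  # A bare URL name assigned to success_url is treated as a literal path,
  # so the cancel/redirect goes to the wrong place; reverse_lazy fixes it.
  from django.core.urlresolvers import reverse_lazy  # django.urls in newer Django
  from django.views.generic import FormView

  class CreateVolumeTypeView(FormView):
      # Broken: raw name string, never resolved through the URL conf.
      # success_url = 'horizon:admin:volume_types:index'

      # Fixed: lazily reverse the named URL when it is first needed.
      success_url = reverse_lazy('horizon:admin:volume_types:index')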

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1487271/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487728] [NEW] ldap and ldappool modules are not listed in the requirements file

2015-08-22 Thread Victor Morales
Public bug reported:

Configuring LDAP as the assignment backend driver causes some module
import errors.  After reviewing the requirements.txt file, those
dependencies are not listed there, although they are listed in the
requirements repo.

** Affects: keystone
 Importance: Undecided
 Assignee: Victor Morales (electrocucaracha)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Victor Morales (electrocucaracha)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1487728

Title:
  ldap and ldappool modules are not listed in the requirements file

Status in Keystone:
  New

Bug description:
  Configuring LDAP as the assignment backend driver causes some module
  import errors.  After reviewing the requirements.txt file, those
  dependencies are not listed there, although they are listed in the
  requirements repo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1487728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487671] [NEW] ldap and ldappool packages are not mentioned in requirements.txt

2015-08-22 Thread Mahesh Sawaiker
Public bug reported:

The ldap backend driver depends on the ldap and ldappool packages and
won't work unless the packages are explicitly installed.

How to reproduce
1) Create virtual env (mkvirtualenv keystone-ldap)
2) Check out git repo (git clone https://github.com/openstack/keystone -b 
stable/juno) 
3)  pip install -r requirements.txt
4) Start the Python interpreter ([GCC 4.6.3] on linux2):
Type "help", "copyright", "credits" or "license" for more information.
>>> import ldap
Traceback (most recent call last):
  File "", line 1, in 
ImportError: No module named ldap
>>>

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1487671

Title:
  ldap and ldappool packages are not mentioned in requirements.txt

Status in Keystone:
  New

Bug description:
  The ldap backend driver depends on the ldap and ldappool packages and
  won't work unless the packages are explicitly installed.

  How to reproduce
  1) Create virtual env (mkvirtualenv keystone-ldap)
  2) Check out git repo (git clone https://github.com/openstack/keystone -b 
stable/juno) 
  3)  pip install -r requirements.txt
  4) Start the Python interpreter ([GCC 4.6.3] on linux2):
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import ldap
  Traceback (most recent call last):
File "", line 1, in 
  ImportError: No module named ldap
  >>>

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1487671/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp