[Yahoo-eng-team] [Bug 1347376] [NEW] create a volume from a image raise AttributeError

2014-07-22 Thread Zhang Hao
Public bug reported:

I'm working under RHEL7 + ICEHOUSE 2014.1
For glance, image is stored in local file system.

I want to create a volume from an image.
So I run the command "cinder create --image-id ${image_id} --display-name
${name} ${size}".
But then I get the error "glanceclient AttributeError: container_format" in
the cinder API debug log.
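
For illustration only (this is not the actual Cinder/glanceclient fix, and the
helper below is hypothetical): an AttributeError like the one above is what you
get when code reads image.container_format directly on a glance image object
that lacks the attribute, whereas a getattr() with a default avoids the crash:

  # Hypothetical sketch, assuming a glanceclient image object.
  def image_metadata(image):
      wanted = ('container_format', 'disk_format', 'size', 'status')
      return {key: getattr(image, key, None) for key in wanted}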

log_http_response 
/usr/lib/python2.7/site-packages/glanceclient/common/http.py:152
2014-07-22 17:25:32.310 15514 ERROR cinder.api.middleware.fault 
[req-0a2334ec-11a0-4356-89b4-492a2a59b302 fafa6d26b7644056bcdd920f2a667785 
c78d023501b34820b5bcd9e2db85a2bf - - -] Caught error: container_format
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault Traceback (most 
recent call last):
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/cinder/api/middleware/fault.py", line 75, in 
__call__
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault return 
req.get_response(self.application)
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1296, in send
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault 
application, catch_exc_info=False)
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1260, in 
call_application
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault app_iter = 
application(self.environ, start_response)
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/keystoneclient/middleware/auth_token.py", 
line 615, in __call__
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault return 
self.app(env, start_response)
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/routes/middleware.py", line 131, in __call__
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault response = 
self.app(environ, start_response)
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault return 
resp(environ, start_response)
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault resp = 
self.call_func(req, *args, **self.kwargs)
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault return 
self.func(req, *args, **kwargs)
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py", line 895, in 
__call__
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault 
content_type, body, accept)
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py", line 943, in 
_process_stack
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault 
action_result = self.dispatch(meth, request, action_args)
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/cinder/api/openstack/wsgi.py", line 1019, in 
dispatch
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault return 
method(req=request, **action_args)
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/cinder/api/v1/volumes.py", line 432, in create
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault new_volume 
= self.volume_api.create(context,
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/python2.7/site-packages/cinder/volume/api.py", line 189, in create
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault 
flow_engine.run()
2014-07-22 17:25:32.310 15514 TRACE cinder.api.middleware.fault   File 
"/usr/lib/pytho

[Yahoo-eng-team] [Bug 1347361] [NEW] There should be a naming convention for neutron DB tables

2014-07-22 Thread Henry Gessau
Public bug reported:

Now that the database is healed and all tables are present, the names of
tables are haphazard and there is no convention for avoiding naming
conflicts or having sensible grouping.

A naming convention may look something like:

__

Existing tables should be renamed to follow the convention.

A README file explaining the convention should be created in the
directory with the models.
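
For illustration only (the convention itself is not fully spelled out in this
report), a prefixed model might look like the hypothetical sketch below:

  # Hypothetical sketch of a prefix-based table name; all names are invented.
  import sqlalchemy as sa
  from sqlalchemy.ext.declarative import declarative_base

  Base = declarative_base()

  class RouterPort(Base):
      # e.g. group L3-related tables under a common 'l3_' prefix
      __tablename__ = 'l3_routerports'
      id = sa.Column(sa.String(36), primary_key=True)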

** Affects: neutron
 Importance: Undecided
 Assignee: Henry Gessau (gessau)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Henry Gessau (gessau)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1347361

Title:
  There should be a naming convention for neutron DB tables

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Now that the database is healed and all tables are present, the names
  of tables are haphazard and there is no convention for avoiding naming
  conflicts or having sensible grouping.

  A naming convention may look something like:

  __

  Existing tables should be renamed to follow the convention.

  A README file explaining the convention should be created in the
  directory with the models.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1347361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347355] [NEW] Extra image metadata not assigned to volume-based instance snapshot image

2014-07-22 Thread Alex Xu
Public bug reported:

curl -i
'http://cloudcontroller:8774/v2/fdbb1e8f23eb40c89f3a677e2621b95c/servers/e2461ba7-3624-4d43-a456-acb87a0fb6f9/action'
-X POST -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient"
-H "Content-Type: application/json" -H "Accept: application/json" -H "X
-Auth-Token:
MIITCgYJKoZIhvcNAQcCoIIS+zCCEvcCAQExDTALBglghkgBZQMEAgEwghFYBgkqhkiG9w0BBwGgghFJBIIRRXsiYWNjZXNzIjogeyJ0b2tlbiI6IHsiaXNzdWVkX2F0IjogIjIwMTQtMDctMjNUMDM6MTU6MDQuMzk5MTgyIiwgImV4cGlyZXMiOiAiMjAxNC0wNy0yM1QwNDoxNTowNFoiLCAiaWQiOiAicGxhY2Vob2xkZXIiLCAidGVuYW50IjogeyJkZXNjcmlwdGlvbiI6IG51bGwsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogImZkYmIxZThmMjNlYjQwYzg5ZjNhNjc3ZTI2MjFiOTVjIiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9jbG91ZGNvbnRyb2xsZXI6ODc3NC92Mi9mZGJiMWU4ZjIzZWI0MGM4OWYzYTY3N2UyNjIxYjk1YyIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly9jbG91ZGNvbnRyb2xsZXI6ODc3NC92Mi9mZGJiMWU4ZjIzZWI0MGM4OWYzYTY3N2UyNjIxYjk1YyIsICJpZCI6ICI1MmQ3NDBkZGJmODc0YWExYmJmNGVmZjU1ZjcyOTlmYSIsICJwdWJsaWNVUkwiOiAiaHR0cDovL2Nsb3VkY29udHJvbGxlcjo4Nzc0L3YyL2ZkYmIxZThmMjNlYjQwYzg5ZjNhNjc3ZTI2MjFiOTVjIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImNvbXB1dGUiLCAibmFtZSI6ICJub3ZhIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL2Nsb3VkY29udHJvbGxlcj
 
o5Njk2LyIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly9jbG91ZGNvbnRyb2xsZXI6OTY5Ni8iLCAiaWQiOiAiNTM1YTAyODRiYTk5NDNiMDg4ZWUxNWNlZjkzODRkNjAiLCAicHVibGljVVJMIjogImh0dHA6Ly9jbG91ZGNvbnRyb2xsZXI6OTY5Ni8ifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAibmV0d29yayIsICJuYW1lIjogIm5ldXRyb24ifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vY2xvdWRjb250cm9sbGVyOjg3NzYvdjIvZmRiYjFlOGYyM2ViNDBjODlmM2E2NzdlMjYyMWI5NWMiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vY2xvdWRjb250cm9sbGVyOjg3NzYvdjIvZmRiYjFlOGYyM2ViNDBjODlmM2E2NzdlMjYyMWI5NWMiLCAiaWQiOiAiMjJiMGVlMzg3MGQ1NGJhODhiZjgzMWVkZDNjMTc3ZjciLCAicHVibGljVVJMIjogImh0dHA6Ly9jbG91ZGNvbnRyb2xsZXI6ODc3Ni92Mi9mZGJiMWU4ZjIzZWI0MGM4OWYzYTY3N2UyNjIxYjk1YyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJ2b2x1bWV2MiIsICJuYW1lIjogImNpbmRlcnYyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL2Nsb3VkY29udHJvbGxlcjo4Nzc0L3YzIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL2Nsb3VkY29udHJvbGxlcjo4Nzc0L3Y
 
zIiwgImlkIjogIjFhNzM5MWViNmUwZjQ4ZGJiNWQ1MjNiZDg4OTUxZDk1IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vY2xvdWRjb250cm9sbGVyOjg3NzQvdjMifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY29tcHV0ZXYzIiwgIm5hbWUiOiAibm92YXYzIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL2Nsb3VkY29udHJvbGxlcjozMzMzIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL2Nsb3VkY29udHJvbGxlcjozMzMzIiwgImlkIjogIjJkMWJjZWEwYjBmYzQ0Y2ZhNTc3ZWNlMGM2NGIwMDQxIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vY2xvdWRjb250cm9sbGVyOjMzMzMifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiczMiLCAibmFtZSI6ICJzMyJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9jbG91ZGNvbnRyb2xsZXI6OTI5MiIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly9jbG91ZGNvbnRyb2xsZXI6OTI5MiIsICJpZCI6ICIzYWJlNGFmY2JmMjM0ZDMxOGZmZmM0NjgxNWE0NmMxNSIsICJwdWJsaWNVUkwiOiAiaHR0cDovL2Nsb3VkY29udHJvbGxlcjo5MjkyIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImltYWdlIiwgIm5hbWUiOiAiZ2xhbmNlIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL2Nsb3VkY29udHJv
 
bGxlcjo4Nzc3LyIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly9jbG91ZGNvbnRyb2xsZXI6ODc3Ny8iLCAiaWQiOiAiMzU5NDJlYTdiZDIyNDA2NWE5MTdjYmEwZmZlNGEwNDYiLCAicHVibGljVVJMIjogImh0dHA6Ly9jbG91ZGNvbnRyb2xsZXI6ODc3Ny8ifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAibWV0ZXJpbmciLCAibmFtZSI6ICJjZWlsb21ldGVyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzE5Mi4xNjguMS4xNTk6ODAwMC92MSIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjEuMTU5OjgwMDAvdjEiLCAiaWQiOiAiMGI2YmI1NGNkNzQzNGY5NGE0MzdiOTk0MTdmZWU5OWEiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjEuMTU5OjgwMDAvdjEifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY2xvdWRmb3JtYXRpb24iLCAibmFtZSI6ICJoZWF0In0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL2Nsb3VkY29udHJvbGxlcjo4Nzc2L3YxL2ZkYmIxZThmMjNlYjQwYzg5ZjNhNjc3ZTI2MjFiOTVjIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL2Nsb3VkY29udHJvbGxlcjo4Nzc2L3YxL2ZkYmIxZThmMjNlYjQwYzg5ZjNhNjc3ZTI2MjFiOTVjIiwgImlkIjogIjljZjIyZDA1Y2MyYTQ3OGY5M
 
TIwMzExY2Q4YTNhNDEyIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vY2xvdWRjb250cm9sbGVyOjg3NzYvdjEvZmRiYjFlOGYyM2ViNDBjODlmM2E2NzdlMjYyMWI5NWMifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAidm9sdW1lIiwgIm5hbWUiOiAiY2luZGVyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL2Nsb3VkY29udHJvbGxlcjo4NzczL3NlcnZpY2VzL0FkbWluIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL2Nsb3VkY29udHJvbGxlcjo4NzczL3NlcnZpY2VzL0Nsb3VkIiwgImlkIjogIjFmMzZiY2E3ZDA4ZDRmYzZhZjExMDZjZGExYzNiZGE4IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vY2xvdWRjb250cm9sbGVyOjg3NzMvc2VydmljZXMvQ2xvdWQifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiZWMyIiwgIm5hbWUiOiAiZWMyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzE5Mi4xNjguM

[Yahoo-eng-team] [Bug 1347354] [NEW] Use keystone policy to check if user password can be updated

2014-07-22 Thread Lin Hua Cheng
Public bug reported:


Keystone has a policy for changing passwords; we need to apply it in
Horizon too.


policy:
  "identity:change_password": "rule:admin_or_owner",

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347354

Title:
  Use keystone policy to check if user password can be updated

Status in OpenStack Dashboard (Horizon):
  New

Bug description:

  Keystone has a policy for changing passwords; we need to apply it in
  Horizon too.

  
  policy:
"identity:change_password": "rule:admin_or_owner",

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1347354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347346] [NEW] openstack.common needs to be synced

2014-07-22 Thread Lin Hua Cheng
Public bug reported:


It has been a while since the last sync; time to update to get the latest bug fixes.

** Affects: horizon
 Importance: Undecided
 Assignee: Lin Hua Cheng (lin-hua-cheng)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Lin Hua Cheng (lin-hua-cheng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347346

Title:
  openstack.common needs to be synced

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
  It has been a while since the last sync; time to update to get the latest bug fixes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1347346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347348] [NEW] Update keystone policy file from keystone

2014-07-22 Thread Lin Hua Cheng
Public bug reported:


Keystone policy has been updated to use the new policy language. We could also 
import the v3 policy file which is backward compatible with v2.
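
For reference, a sketch of the difference (the rule shown is illustrative): the
old policy files used a list-of-lists form, while the new policy language uses
plain boolean strings, which is what the updated Keystone file uses:

  # old policy language
  "identity:change_password": [["rule:admin_or_owner"]],
  # new policy language
  "identity:change_password": "rule:admin_or_owner",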

** Affects: horizon
 Importance: Undecided
 Assignee: Lin Hua Cheng (lin-hua-cheng)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Lin Hua Cheng (lin-hua-cheng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347348

Title:
  Update keystone policy file from keystone

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
  Keystone policy has been updated to use the new policy language. We could 
also import the v3 policy file which is backward compatible with v2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1347348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347309] Re: if multidomain enabled, login error message should change

2014-07-22 Thread David Lyle
The form that displays this error is actually in django_openstack_auth.

** Also affects: django-openstack-auth
   Importance: Undecided
   Status: New

** Changed in: django-openstack-auth
   Status: New => Confirmed

** Changed in: django-openstack-auth
   Importance: Undecided => Low

** Changed in: django-openstack-auth
 Assignee: (unassigned) => David Lyle (david-lyle)

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347309

Title:
  if multidomain enabled, login error message should change

Status in Django OpenStack Auth:
  Confirmed
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  To test this, update your local_settings.py to:

  OPENSTACK_API_VERSIONS = {
      "identity": 3
  }

  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
  #OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
  OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

  After you make these changes you will see an additional 'Domain' field
  on the login screen.

  However, when you enter the domain name incorrectly, you are presented
  with the message: 'Invalid user name or password.'  It should be updated
  to say: 'Invalid domain, user name or password.'

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1347309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332855] Re: grenade test fails due to tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario[dashboard]

2014-07-22 Thread David Lyle
** Changed in: horizon
   Status: Confirmed => Fix Released

** Changed in: horizon
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1332855

Title:
  grenade test fails due to
  
tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario[dashboard]

Status in Django OpenStack Auth:
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Committed

Bug description:
  Grenade dsvm-jobs are failing.  Console output doesn't offer much, but
  looking at the grenade summary logs the culprit seems to be a
  dashboard ops test:

  http://logs.openstack.org/52/101252/1/check/check-grenade-
  dsvm/25e55c2/logs/grenade.sh.log.2014-06-21-153223

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1332855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347318] [NEW] tempest.api.identity.admin.v3.test_tokens.TokensV3TestJSON.test_rescope_token race fails with DB2

2014-07-22 Thread Matt Riedemann
Public bug reported:

This is against master (Juno) code with DB2 10.5, sqlalchemy-migrate 0.9.1
and sqlalchemy 0.8.4 on RHEL 6.5; we are seeing race failures with the
tempest.api.identity.admin.v3.test_tokens.TokensV3TestJSON.test_rescope_token
(and xml) tests like this:

Traceback (most recent call last):\n  File
"/tmp/tempest/tempest/tempest/api/identity/admin/v3/test_tokens.py",
line 145, in test_rescope_token\ndomain=\'Default\')\n  File
"/tmp/tempest/tempest/tempest/services/identity/v3/json/identity_client.py",
line 579, in auth\nresp, body = self.post(self.auth_url,
body=body)\n  File "/tmp/tempest/tempest/tempest/common/rest_client.py",
line 218, in post\nreturn self.request(\'POST\', url, extra_headers,
headers, body)\n  File
"/tmp/tempest/tempest/tempest/services/identity/v3/json/identity_client.py",
line 605, in request\n\'Unexpected status code
{0}\'.format(resp.status))\nIdentityError: Got identity error\nDetails:
Unexpected status code 404

The test is fine with MySQL; it looks like an issue with the expires_at
timestamp:

A breakpoint in keystone/contrib/revoke/model.py, is_revoked():

mysql:

(Pdb) p self.revoke_map
{'trust_id=*': {'consumer_id=*': {'access_token_id=*': {'expires_at=2014-07-22 
22:55:53': {'domain_id=*': {'project_id=*': 
{u'user_id=949c28307de74cafb4ab07c6ada75d6c': {'role_id=*': {'issued_before': 
datetime.datetime(2014, 7, 22, 21, 55, 59, 610579)}

DB2:

(Pdb) p self.revoke_map
{'trust_id=*': {'consumer_id=*': {'access_token_id=*': {'expires_at=2014-07-22 
22:58:44.322976': {'domain_id=*': {'project_id=*': 
{u'user_id=c4ed3fa9ee5f4e02b580389400a817e0': {'role_id=*': {'issued_before': 
datetime.datetime(2014, 7, 22, 21, 58, 49, 390556)}
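
For illustration only (this is not the Keystone fix): the mismatch above goes
away if the expires_at value is normalized to whole seconds before it is used
as a revocation-map key, so MySQL (which drops microseconds) and DB2 (which
keeps them) produce the same key:

  # Hypothetical sketch of the normalization idea.
  import datetime

  def normalize(ts):
      # drop sub-second precision before building the 'expires_at=...' key
      return ts.replace(microsecond=0)

  key = 'expires_at=%s' % normalize(
      datetime.datetime(2014, 7, 22, 22, 58, 44, 322976))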

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: db2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1347318

Title:
  tempest.api.identity.admin.v3.test_tokens.TokensV3TestJSON.test_rescope_token
  race fails with DB2

Status in OpenStack Identity (Keystone):
  New

Bug description:
  This is against master (Juno) code with DB2 10.5, sqlalchemy-migrate 0.9.1
  and sqlalchemy 0.8.4 on RHEL 6.5; we are seeing race failures with the
  tempest.api.identity.admin.v3.test_tokens.TokensV3TestJSON.test_rescope_token
  (and xml) tests like this:

  Traceback (most recent call last):\n  File
  "/tmp/tempest/tempest/tempest/api/identity/admin/v3/test_tokens.py",
  line 145, in test_rescope_token\ndomain=\'Default\')\n  File
  "/tmp/tempest/tempest/tempest/services/identity/v3/json/identity_client.py",
  line 579, in auth\nresp, body = self.post(self.auth_url,
  body=body)\n  File
  "/tmp/tempest/tempest/tempest/common/rest_client.py", line 218, in
  post\nreturn self.request(\'POST\', url, extra_headers, headers,
  body)\n  File
  "/tmp/tempest/tempest/tempest/services/identity/v3/json/identity_client.py",
  line 605, in request\n\'Unexpected status code
  {0}\'.format(resp.status))\nIdentityError: Got identity
  error\nDetails: Unexpected status code 404

  The test is fine with MySQL; it looks like an issue with the
  expires_at timestamp:

  A breakpoint in keystone/contrib/revoke/model.py, is_revoked():

  mysql:

  (Pdb) p self.revoke_map
  {'trust_id=*': {'consumer_id=*': {'access_token_id=*': 
{'expires_at=2014-07-22 22:55:53': {'domain_id=*': {'project_id=*': 
{u'user_id=949c28307de74cafb4ab07c6ada75d6c': {'role_id=*': {'issued_before': 
datetime.datetime(2014, 7, 22, 21, 55, 59, 610579)}

  DB2:

  (Pdb) p self.revoke_map
  {'trust_id=*': {'consumer_id=*': {'access_token_id=*': 
{'expires_at=2014-07-22 22:58:44.322976': {'domain_id=*': {'project_id=*': 
{u'user_id=c4ed3fa9ee5f4e02b580389400a817e0': {'role_id=*': {'issued_before': 
datetime.datetime(2014, 7, 22, 21, 58, 49, 390556)}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1347318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347309] [NEW] if multidomain enabled, login error message should change

2014-07-22 Thread Cindy Lu
Public bug reported:

To test this, update your local_settings.py to:

OPENSTACK_API_VERSIONS = {
    "identity": 3
}

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
#OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

After you make these changes you will see an additional 'Domain' field on
the login screen.

However, when you enter the domain name incorrectly, you are presented with
the message: 'Invalid user name or password.'  It should be updated to say:
'Invalid domain, user name or password.'
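
A minimal sketch of the message selection this asks for, assuming Django
settings and translation helpers (the function below is hypothetical; the real
form code lives in django_openstack_auth):

  # Hypothetical sketch only, not the django_openstack_auth patch.
  from django.conf import settings
  from django.utils.translation import ugettext_lazy as _

  def invalid_login_message():
      if getattr(settings, 'OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT', False):
          return _('Invalid domain, user name or password.')
      return _('Invalid user name or password.')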

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347309

Title:
  if multidomain enabled, login error message should change

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  To test this, update your local_settings.py to:

  OPENSTACK_API_VERSIONS = {
      "identity": 3
  }

  OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
  #OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
  OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

  After you make these changes you will see an additional 'Domain' field
  on the login screen.

  However, when you enter the domain name incorrectly, you are presented
  with the message: 'Invalid user name or password.'  It should be updated
  to say: 'Invalid domain, user name or password.'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1347309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1069917] Re: Setting InnoDB for tables breaks mysqlcluster/ndb replication

2014-07-22 Thread Jay Pipes
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1069917

Title:
  Setting InnoDB for tables breaks mysqlcluster/ndb replication

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Updating towards folsom breaks mysqlcluster/ndb replication because
  tables are created/altered to use InnoDB.

  Neither MySQL replication or drbd/corosync are solutions, as the
  production cluster(s) span multiple node groups and datacenters and
  relies heavily on automated monitoring and orchestration.

  This is a major bug as far as we are concerned as it will require
  manual intervention, major downtime and a partial re-architecture to
  resolve.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1069917/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1217927] Re: "nova-manage version" and "nova-manage --version" sometimes return different versions

2014-07-22 Thread Jay Pipes
Related bugs are marked Invalid. Patch is abandoned, so marking this
Invalid. David, please reset the status to something else if you
disagree. Thanks!

-jay

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1217927

Title:
  "nova-manage version" and "nova-manage --version" sometimes return
  different versions

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Downstream bug report
  https://bugzilla.redhat.com/show_bug.cgi?id=952811

  Seems that "nova-manage version" and "nova-manage --version" return
  the same thing in some environments but different things in other
  environments.

  This appears to break Tempest:

  http://www.mail-archive.com/openstack-dev@lists.openstack.org/msg03847.html
  https://bugs.launchpad.net/tempest/+bug/1214781

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1217927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300800] Re: Nova boot fails if sbin not in path

2014-07-22 Thread Angus Lees
** Also affects: devstack
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1300800

Title:
  Nova boot fails if sbin not in path

Status in devstack - openstack dev environments:
  New
Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Manuals:
  Confirmed

Bug description:
  In a fresh install of devstack I see an error in the nova-compute log
  when I try to start an instance. I tracked this down to coming from
  nova/linux_net.py where sysctl is called in _enable_ipv4_forwarding().

  If I add /sbin to my path the error goes away. However, some distros,
  e.g. Debian, don't include sbin in the standard path; rather, they
  restrict it to root users only.

  I think the call to sysctl (and possibly other similar calls) should
  be moved to use rootwrap; along with preventing issues like this, it
  would have the added benefit of making the code slightly more OS
  agnostic.
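
  For illustration only (not the actual Nova change), running sysctl through
  rootwrap lets the wrapper resolve the binary and elevate privileges, instead
  of depending on /sbin being in the service user's PATH:

    # Hypothetical sketch, assuming nova.utils.execute with run_as_root.
    from nova import utils

    def enable_ipv4_forwarding():
        utils.execute('sysctl', '-w', 'net.ipv4.ip_forward=1',
                      run_as_root=True)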

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1300800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347262] [NEW] Ldap Live test failures

2014-07-22 Thread Arun Kant
Public bug reported:

In keystone master, when live ldap test are executed against local
openldap instance, 7 tests are failing.

3 tests fail with the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/mock.py", line 1201, in patched
return func(*args, **keywargs)
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/tests/test_backend_ldap.py",
 line 1156, in test_chase_referrals_off
user_api.get_connection(user=None, password=None)
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/common/ldap/core.py",
 line 965, in get_connection
conn.simple_bind_s(user, password)
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/common/ldap/core.py",
 line 638, in simple_bind_s
serverctrls, clientctrls)
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/tests/fakeldap.py",
 line 246, in simple_bind_s
attrs = self.db[self.key(who)]
AttributeError: 'FakeLdap' object has no attribute 'db'

The tests which are failing with above error are

LiveLDAPIdentity.test_chase_referrals_off
LiveLDAPIdentity.test_chase_referrals_on
LiveLDAPIdentity.test_debug_level_set

Reason: In FakeLdap, the live-test credentials are different from those in
backend_ldap.conf and do not match at
https://github.com/openstack/keystone/blob/master/keystone/tests/fakeldap.py#L242


1 test fails with the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/mock.py", line 1201, in patched
return func(*args, **keywargs)
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/tests/test_backend_ldap.py",
 line 1288, in test_user_mixed_case_attribute
user['email'])
KeyError: 'email'

Test failed:
LiveLDAPIdentity.test_user_mixed_case_attribute

Reason: CONF.ldap.user_mail_attribute is different in the live test: it is
'mail', not 'email' as in backend_ldap.conf, so the test code needs to be
changed to handle both scenarios, e.g. as sketched below.
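
A minimal sketch of how the test could tolerate either mapping (the helper
name is hypothetical, not the actual patch):

  # Hypothetical sketch: pick whichever mail attribute the deployment maps.
  def expected_mail(user_ref):
      # unit tests configure user_mail_attribute = 'email';
      # the live OpenLDAP setup uses the standard 'mail' attribute
      return user_ref.get('email') or user_ref.get('mail')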

2 tests fail with the following error:

Traceback (most recent call last):
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/tests/test_backend_ldap.py",
 line 695, in test_user_id_comma
user = self.identity_api.driver.create_user(user_id, user)
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/identity/backends/ldap.py",
 line 94, in create_user
user_ref = self.user.create(user)
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/identity/backends/ldap.py",
 line 230, in create
values = super(UserApi, self).create(values)
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/common/ldap/core.py",
 line 1390, in create
ref = super(EnabledEmuMixIn, self).create(values)
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/common/ldap/core.py",
 line 1085, in create
conn.add_s(self._id_to_dn(values['id']), attrs)
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/common/ldap/core.py",
 line 656, in add_s
return self.conn.add_s(dn_utf8, ldap_attrs_utf8)
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/common/ldap/core.py",
 line 551, in add_s
return self.conn.add_s(dn, modlist)
  File "/usr/lib/python2.7/dist-packages/ldap/ldapobject.py", line 194, in add_s
return self.result(msgid,all=1,timeout=self.timeout)
  File "/usr/lib/python2.7/dist-packages/ldap/ldapobject.py", line 422, in 
result
res_type,res_data,res_msgid = self.result2(msgid,all,timeout)
  File "/usr/lib/python2.7/dist-packages/ldap/ldapobject.py", line 426, in 
result2
res_type, res_data, res_msgid, srv_ctrls = self.result3(msgid,all,timeout)
  File "/usr/lib/python2.7/dist-packages/ldap/ldapobject.py", line 432, in 
result3
ldap_result = self._ldap_call(self._l.result3,msgid,all,timeout)
  File "/usr/lib/python2.7/dist-packages/ldap/ldapobject.py", line 96, in 
_ldap_call
result = func(*args,**kwargs)
UNDEFINED_TYPE: {'info': 'domain_id: AttributeDescription contains 
inappropriate characters', 'desc': 'Undefined attribute type'}

Test failed:
LiveLDAPIdentity.test_user_id_comma
LiveLDAPIdentity.test_user_id_comma_grants

Reason: The related test code creates the user via the driver instead of the
identity API, which filters domain_id from the response.
Possible solution: lines 695, 714, 749 in
https://review.openstack.org/#/c/95300/19/keystone/tests/test_backend_ldap.py,cm


1 test fails with the following error:

Traceback (most recent call last):
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/tests/test_backend_ldap.py",
 line 1267, in test_user_extra_attribute_mapping_description_is_returned
user = self.identity_api.create_user(user)
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/notifications.py",
 line 75, in wrapper
result = f(*args, **kwargs)
  File 
"/home/arunkant-uws/myFolder/myWork/openstack/keystone/keystone/identity/core.py",
 line 182, in wrapper

[Yahoo-eng-team] [Bug 1334368] Re: HEAD and GET inconsistencies in Keystone

2014-07-22 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/104673
Committed: 
https://git.openstack.org/cgit/openstack/tempest/commit/?id=883311d64fd749315ee3639bff6c730e86026ac5
Submitter: Jenkins
Branch:master

commit 883311d64fd749315ee3639bff6c730e86026ac5
Author: Morgan Fainberg 
Date:   Thu Jul 3 13:13:10 2014 -0700

Re-enable 'check_trust_roles'

Re-enable the 'check_trust_roles' method. This can be
merged after the patches for change id
I13ce159cbe9739d4bf5d321fc4bd069245f32734 are merged
(master and stable/icehouse for Keystone).

This changeset updates the new location for the check_trust_role
HTTP status validation (in services/identity/v3/json/identity_client.py).
Previously this was located in identity/admin/v3/test_trusts.py.

Commented code indicating the above location change occurred has been
removed.

Change-Id: If1b7f18d7a357f4b3a4b478e300a17f2cc4a6159
Closes-Bug: #1334368


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1334368

Title:
  HEAD and GET inconsistencies in Keystone

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Committed
Status in Tempest:
  Fix Released

Bug description:
  While trying to convert Keystone to gate/check under mod_wsgi, it was
  noticed that occasionally a few HEAD calls were returning HTTP 200
  where under eventlet they consistently return HTTP 204.

  This is an inconsistency within Keystone. Based upon the RFC, HEAD
  should be identical to GET except that no body is returned.
  Apache + MOD_WSGI in some cases converts a HEAD request to a GET
  request to the back-end wsgi application to avoid issues where the
  headers cannot be built to be sent as part of the response (this can
  occur when no content is returned from the wsgi app).

  This situation shows that Keystone should likely never build specific
  HEAD request methods; HEAD should simply call the controller's GET
  handler, and the wsgi layer should then remove the response body.

  This will help to simplify Keystone's code as well as make the API
  responses more consistent.
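
  A minimal sketch of that idea at the WSGI layer (this is not Keystone's
  implementation; the class and its behavior are illustrative), using WebOb as
  the surrounding tracebacks do:

    # Hypothetical sketch: serve HEAD by running the GET handler and
    # discarding the body, so status and headers always match GET.
    import webob.dec

    class HeadAsGet(object):
        def __init__(self, app):
            self.app = app

        @webob.dec.wsgify
        def __call__(self, req):
            if req.method == 'HEAD':
                req.method = 'GET'
                resp = req.get_response(self.app)
                resp.body = b''
                return resp
            return req.get_response(self.app)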

  Example Error in Gate:

  2014-06-25 05:20:37.820 | 
tempest.api.identity.admin.v3.test_trusts.TrustsV3TestJSON.test_trust_expire[gate,smoke]
  2014-06-25 05:20:37.820 | 

  2014-06-25 05:20:37.820 | 
  2014-06-25 05:20:37.820 | Captured traceback:
  2014-06-25 05:20:37.820 | ~~~
  2014-06-25 05:20:37.820 | Traceback (most recent call last):
  2014-06-25 05:20:37.820 |   File 
"tempest/api/identity/admin/v3/test_trusts.py", line 241, in test_trust_expire
  2014-06-25 05:20:37.820 | self.check_trust_roles()
  2014-06-25 05:20:37.820 |   File 
"tempest/api/identity/admin/v3/test_trusts.py", line 173, in check_trust_roles
  2014-06-25 05:20:37.821 | self.assertEqual('204', resp['status'])
  2014-06-25 05:20:37.821 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 321, in 
assertEqual
  2014-06-25 05:20:37.821 | self.assertThat(observed, matcher, message)
  2014-06-25 05:20:37.821 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 406, in 
assertThat
  2014-06-25 05:20:37.821 | raise mismatch_error
  2014-06-25 05:20:37.821 | MismatchError: '204' != '200'

  
  This is likely going to require changes to Keystone, Keystoneclient, Tempest, 
and possibly services that consume data from keystone.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1334368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347236] [NEW] Tests fail due to keystoneclient 0.10 release

2014-07-22 Thread Jamie Lennox
Public bug reported:

Tests and the gate fail with errors that look like:

Traceback (most recent call last):
  File 
"/home/jamie/.virtualenvs/horizon/lib/python2.7/site-packages/nose/case.py", 
line 197, in runTest
self.test(*self.arg)
  File "/home/jamie/work/horizon/openstack_dashboard/test/test_data/utils.py", 
line 48, in load_test_data
return TestData(*loaders)
  File "/home/jamie/work/horizon/openstack_dashboard/test/test_data/utils.py", 
line 74, in __init__
data_func(self)
  File 
"/home/jamie/work/horizon/openstack_dashboard/test/test_data/ceilometer_data.py",
 line 56, in data
TEST.ceilometer_users.add(users.User(users.UserManager(None),
TypeError: __init__() takes exactly 3 arguments (2 given)

This is because the horizon test code constructs keystoneclient
Managers. These aren't used by horizon so can just be replaced with None
for test data.
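
A sketch of the kind of change described above (assuming the keystoneclient
v2.0 users module used by the test data; the user fields are illustrative):

  # Hypothetical sketch: pass None where the manager used to go.
  from keystoneclient.v2_0 import users

  test_user = users.User(None, {'id': '1', 'name': 'test', 'enabled': True})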

** Affects: horizon
 Importance: Critical
 Assignee: Jamie Lennox (jamielennox)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347236

Title:
  Tests fail due to keystoneclient 0.10 release

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Tests and the gate fail with errors that look like:

  Traceback (most recent call last):
File 
"/home/jamie/.virtualenvs/horizon/lib/python2.7/site-packages/nose/case.py", 
line 197, in runTest
  self.test(*self.arg)
File 
"/home/jamie/work/horizon/openstack_dashboard/test/test_data/utils.py", line 
48, in load_test_data
  return TestData(*loaders)
File 
"/home/jamie/work/horizon/openstack_dashboard/test/test_data/utils.py", line 
74, in __init__
  data_func(self)
File 
"/home/jamie/work/horizon/openstack_dashboard/test/test_data/ceilometer_data.py",
 line 56, in data
  TEST.ceilometer_users.add(users.User(users.UserManager(None),
  TypeError: __init__() takes exactly 3 arguments (2 given)

  This is because the horizon test code constructs keystoneclient
  Managers. These aren't used by horizon so can just be replaced with
  None for test data.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1347236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300800] Re: Nova boot fails if sbin not in path

2014-07-22 Thread Jay Pipes
** Changed in: nova
   Status: In Progress => Invalid

** Changed in: openstack-manuals
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1300800

Title:
  Nova boot fails if sbin not in path

Status in OpenStack Compute (Nova):
  Invalid
Status in OpenStack Manuals:
  Confirmed

Bug description:
  In a fresh install of devstack I see an error in the nova-compute log
  when I try to start an instance. I tracked this down to coming from
  nova/linux_net.py where sysctl is called in _enable_ipv4_forwarding().

  If I add /sbin to my path the error goes away. However, some distros,
  e.g. Debian, don't include sbin in the standard path; rather, they
  restrict it to root users only.

  I think the call to sysctl (and possibly other similar calls) should
  be moved to use rootwrap; along with preventing issues like this, it
  would have the added benefit of making the code slightly more OS
  agnostic.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1300800/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340187] Re: paramiko EOFError in test_volume_boot_pattern

2014-07-22 Thread Matt Riedemann
message:"File \"tempest/scenario/test_volume_boot_pattern.py\"" AND
message:"in _ssh_to_server" AND message:"EOFError" AND
tags:"tempest.txt"

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmlsZSBcXFwidGVtcGVzdC9zY2VuYXJpby90ZXN0X3ZvbHVtZV9ib290X3BhdHRlcm4ucHlcXFwiXCIgQU5EIG1lc3NhZ2U6XCJpbiBfc3NoX3RvX3NlcnZlclwiIEFORCBtZXNzYWdlOlwiRU9GRXJyb3JcIiBBTkQgdGFnczpcInRlbXBlc3QudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDYwNjQxMzAyMjAsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

11 hits in 7 days, all failures, looks like it's only showing up in
neutron jobs.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340187

Title:
  paramiko EOFError in test_volume_boot_pattern

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  New

Bug description:
  Seen here http://logs.openstack.org/09/106009/1/check/check-tempest-
  dsvm-neutron-icehouse/6b01164/console.html

  
  {0} 
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern
 [108.840388s] ... FAILED
  2014-07-10 11:31:43.744 | {0} 
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern
 [162.293278s] ... ok
  2014-07-10 11:31:44.928 | 
  2014-07-10 11:31:44.928 | 
  2014-07-10 11:31:44.929 | ==
  2014-07-10 11:31:44.929 | Failed 1 tests - output below:
  2014-07-10 11:31:44.929 | ==
  2014-07-10 11:31:44.929 | 
  2014-07-10 11:31:44.930 | 
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern[compute,image,volume]
  2014-07-10 11:31:44.930 | 
--
  2014-07-10 11:31:44.930 | 
  2014-07-10 11:31:44.930 | Captured traceback:
  2014-07-10 11:31:44.931 | ~~~
  2014-07-10 11:31:44.931 | Traceback (most recent call last):
  2014-07-10 11:31:44.931 |   File "tempest/test.py", line 128, in wrapper
  2014-07-10 11:31:44.931 | return f(self, *func_args, **func_kwargs)
  2014-07-10 11:31:44.931 |   File 
"tempest/scenario/test_volume_boot_pattern.py", line 163, in 
test_volume_boot_pattern
  2014-07-10 11:31:44.931 | keypair)
  2014-07-10 11:31:44.931 |   File 
"tempest/scenario/test_volume_boot_pattern.py", line 116, in _ssh_to_server
  2014-07-10 11:31:44.931 | private_key=keypair.private_key)
  2014-07-10 11:31:44.931 |   File "tempest/scenario/manager.py", line 453, 
in get_remote_client
  2014-07-10 11:31:44.931 | linux_client.validate_authentication()
  2014-07-10 11:31:44.931 |   File 
"tempest/common/utils/linux/remote_client.py", line 53, in 
validate_authentication
  2014-07-10 11:31:44.932 | self.ssh_client.test_connection_auth()
  2014-07-10 11:31:44.932 |   File "tempest/common/ssh.py", line 150, in 
test_connection_auth
  2014-07-10 11:31:44.932 | connection = self._get_ssh_connection()
  2014-07-10 11:31:44.932 |   File "tempest/common/ssh.py", line 75, in 
_get_ssh_connection
  2014-07-10 11:31:44.932 | timeout=self.channel_timeout, 
pkey=self.pkey)
  2014-07-10 11:31:44.932 |   File 
"/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 242, in 
connect
  2014-07-10 11:31:44.932 | t.start_client()
  2014-07-10 11:31:44.932 |   File 
"/usr/local/lib/python2.7/dist-packages/paramiko/transport.py", line 346, in 
start_client
  2014-07-10 11:31:44.932 | raise e
  2014-07-10 11:31:44.932 | EOFError
  2014-07-10 11:31:44.932 | 
  2014-07-10 11:31:44.933 | 

  Appears similar to https://bugs.launchpad.net/tempest/+bug/1295808

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1340187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347156] [NEW] deleting floating-ip in nova-network does not free quota

2014-07-22 Thread David Kranz
Public bug reported:

It seems that when you allocate a floating-ip in a tenant with nova-
network, its quota is never returned after calling 'nova floating-ip-
delete' even though 'nova floating-ip-list' shows it gone. This behavior
applies to each tenant individually. The gate tests are passing because
they all run with tenant isolation. But this problem shows up in the
nightly run without tenant isolation:

http://logs.openstack.org/periodic-qa/periodic-tempest-dsvm-full-non-
isolated-master/2bc5ead/console.html

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347156

Title:
  deleting floating-ip in nova-network does not free quota

Status in OpenStack Compute (Nova):
  New

Bug description:
  It seems that when you allocate a floating-ip in a tenant with nova-
  network, its quota is never returned after calling 'nova floating-ip-
  delete' even though 'nova floating-ip-list' shows it gone. This
  behavior applies to each tenant individually. The gate tests are
  passing because they all run with tenant isolation. But this problem
  shows up in the nightly run without tenant isolation:

  http://logs.openstack.org/periodic-qa/periodic-tempest-dsvm-full-non-
  isolated-master/2bc5ead/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347156/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1273988] Re: keystoneclient requires --pass to create user while keystone doesn't

2014-07-22 Thread Dolph Mathews
** Changed in: python-keystoneclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1273988

Title:
  keystoneclient requires --pass to create user while keystone doesn't

Status in OpenStack Identity (Keystone):
  Invalid
Status in Python client library for Keystone:
  Fix Released

Bug description:
  name is required in the REST API, but the CLI requires an extra
  --pass argument

  # uname -a
  Linux havana 3.8.0-29-generic #42~precise1-Ubuntu SMP Wed Aug 14 16:19:23 UTC 
2013 x86_64 x86_64 x86_64 GNU/Linux

  # keystone-manage --version
  2013.2

  # keystone --version
  0.3.2

  # curl -i -X POST http://160.132.0.17:35357/v2.0/users -H "User-Agent: 
python-keystoneclient" -H "Content-Type: application/json" -H "X-Auth-Token: 
ac60d12b5b6c668f726a" -d '{"user": {"name": "test-create"}}'
  HTTP/1.1 200 OK
  Vary: X-Auth-Token
  Content-Type: application/json
  Content-Length: 92
  Date: Wed, 29 Jan 2014 07:27:09 GMT

  {"user": {"enabled": true, "name": "test-create", "id":
  "f23d8e2835a0491db1f13a313446768d"}}

  # keystone user-create --name test-create
  Expecting to find string in password. The server could not comply with the 
request since it is either malformed or otherwise incorrect. The client is 
assumed to be in error. (HTTP 400)

  if the keystone cli supports creating a user without a password, we can
  update that user's password by:

  # keystone user-password-update test-create --pass xxx

  to verify this solution:

  # keystone user-role-add --user test-create --tenant admin --role admin # can 
be other tenant and role
  # keystone --os-username test-create --os-password xxx --os-tenant-name admin 
user-get test-create
  +--+--+
  | Property |  Value   |
  +--+--+
  | enabled  |   True   |
  |id| f23d8e2835a0491db1f13a313446768d |
  |   name   |   test-create|
  +--+--+

  the problem is that

  # keystone --debug user-create --name test-create
  WARNING: Bypassing authentication using a token & endpoint (authentication 
credentials are being ignored).
  REQ: curl -i -X POST http://160.132.0.17:35357/v2.0/users -H "User-Agent: 
python-keystoneclient" -H "Content-Type: application/json" -H "X-Auth-Token: 
ac60d12b5b6c668f726a"
  REQ BODY: {"user": {"email": null, "password": null, "enabled": true, "name": 
"test-create", "tenantId": null}}

  RESP: [400] CaseInsensitiveDict({'date': 'Wed, 29 Jan 2014 07:43:58 GMT', 
'vary': 'X-Auth-Token', 'content-length': '236', 'content-type': 
'application/json'})
  RESP BODY: {"error": {"message": "Expecting to find string in password. The 
server could not comply with the request since it is either malformed or 
otherwise incorrect. The client is assumed to be in error.", "code": 400, 
"title": "Bad Request"}}

  the server side can handle the password attribute not being set, but
  cannot accept it being None

  we can fix this by setting --pass to SUPPRESS or by updating the server-side
  validation to treat None as not set and leave it blank in the db backend.
  I would prefer fixing both sides, since some users may hit the same problem
  when they send a request like rest.json={..., 'pass': null} to the server.
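
  A sketch of the server-side option mentioned above (illustrative only, not
  the actual Keystone validation code): treat a null password the same as an
  omitted one.

    # Hypothetical sketch of "treat None as not set".
    def normalize_password(user_ref):
        if user_ref.get('password') is None:
            user_ref.pop('password', None)  # behave as if it was never sent
        return user_ref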

  ref:
  
https://github.com/openstack/keystone/blob/2013.2.1/keystone/common/utils.py#L100

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1273988/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311778] Re: Unit tests fail with MessagingTimeout errors

2014-07-22 Thread Joe Gordon
the exception is coming from oslo.messaging:
http://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo/messaging/_drivers/impl_fake.py#n184

** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1311778

Title:
  Unit tests fail with MessagingTimeout errors

Status in OpenStack Compute (Nova):
  In Progress
Status in Messaging API for OpenStack:
  New

Bug description:
  There is an issue that is causing unit tests to fail with the
  following error:

  MessagingTimeout: No reply on topic conductor
  MessagingTimeout: No reply on topic scheduler

  2014-04-23 13:45:52.017 | Traceback (most recent call last):
  2014-04-23 13:45:52.017 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 133, in _dispatch_and_reply
  2014-04-23 13:45:52.017 | incoming.message))
  2014-04-23 13:45:52.017 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 176, in _dispatch
  2014-04-23 13:45:52.017 | return self._do_dispatch(endpoint, method, 
ctxt, args)
  2014-04-23 13:45:52.017 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 122, in _do_dispatch
  2014-04-23 13:45:52.017 | result = getattr(endpoint, method)(ctxt, 
**new_args)
  2014-04-23 13:45:52.018 |   File "nova/conductor/manager.py", line 798, in 
build_instances
  2014-04-23 13:45:52.018 | legacy_bdm_in_spec=legacy_bdm)
  2014-04-23 13:51:50.628 | libvir:  error : internal error could not 
initialize domain event timer
  2014-04-23 13:54:57.953 |   File "nova/scheduler/rpcapi.py", line 120, in 
run_instance
  2014-04-23 13:54:57.953 | cctxt.cast(ctxt, 'run_instance', **msg_kwargs)
  2014-04-23 13:54:57.953 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/rpc/client.py",
 line 150, in call
  2014-04-23 13:54:57.953 | wait_for_reply=True, timeout=timeout)
  2014-04-23 13:54:57.953 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/transport.py",
 line 90, in _send
  2014-04-23 13:54:57.953 | timeout=timeout)
  2014-04-23 13:54:57.954 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/_drivers/impl_fake.py",
 line 166, in send
  2014-04-23 13:54:57.954 | return self._send(target, ctxt, message, 
wait_for_reply, timeout)
  2014-04-23 13:54:57.954 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/oslo/messaging/_drivers/impl_fake.py",
 line 161, in _send
  2014-04-23 13:54:57.954 | 'No reply on topic %s' % target.topic)
  2014-04-23 13:54:57.954 | MessagingTimeout: No reply on topic scheduler

  

  2014-04-23 13:45:52.008 | Traceback (most recent call last):
  2014-04-23 13:45:52.008 |   File "nova/api/openstack/__init__.py", line 125, 
in __call__
  2014-04-23 13:45:52.008 | return req.get_response(self.application)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/request.py",
 line 1320, in send
  2014-04-23 13:45:52.009 | application, catch_exc_info=False)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/request.py",
 line 1284, in call_application
  2014-04-23 13:45:52.009 | app_iter = application(self.environ, 
start_response)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.009 | return resp(environ, start_response)
  2014-04-23 13:45:52.009 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.010 | return resp(environ, start_response)
  2014-04-23 13:45:52.010 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.010 | return resp(environ, start_response)
  2014-04-23 13:45:52.010 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/webob/dec.py",
 line 144, in __call__
  2014-04-23 13:45:52.010 | return resp(environ, start_response)
  2014-04-23 13:45:52.010 |   File 
"/home/jenkins/workspace/gate-nova-python26/.tox/py26/lib/python2.6/site-packages/routes/middleware.py",
 line 131, in __call__
  2014-04-23 13:45:52.010 | response = self.app(environ, start_response)
  2014-04-23 13:45:52.011 |   File 
"/home/jenkins/w

[Yahoo-eng-team] [Bug 1342274] Re: auth_token middleware in keystoneclient is deprecated

2014-07-22 Thread Sam Leong
** Also affects: designate
   Importance: Undecided
   Status: New

** Changed in: designate
 Assignee: (unassigned) => Sam Leong (chio-fai-sam-leong)

** Changed in: designate
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1342274

Title:
  auth_token middleware in keystoneclient is deprecated

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Orchestration API (Heat):
  In Progress
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  In Progress
Status in OpenStack Message Queuing Service (Marconi):
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in OpenStack Compute (Nova):
  Fix Committed
Status in Python client library for Keystone:
  Fix Committed
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  In Progress
Status in Openstack Database (Trove):
  In Progress

Bug description:
  
  The auth_token middleware in keystoneclient is deprecated and will only get 
security updates. Projects should use the auth_token middleware in 
keystonemiddleware.
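
  A minimal illustration of the switch (the module path comes from the
  keystonemiddleware package itself; this is a sketch, not any specific
  project's patch):

  # deprecated location:
  # from keystoneclient.middleware import auth_token
  # replacement:
  from keystonemiddleware import auth_token

  Services that load the filter through paste deploy make the equivalent
  change to the filter_factory entry in their api-paste.ini.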

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1342274/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1344513] Re: test_cinder_encryption_type_list fails

2014-07-22 Thread Joe Gordon
** Also affects: cinder
   Importance: Undecided
   Status: New

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1344513

Title:
  test_cinder_encryption_type_list fails

Status in Cinder:
  New

Bug description:
  Seeing this at the gate:

  2014-07-18 21:01:58.253 | ==
  2014-07-18 21:01:58.254 | Failed 1 tests - output below:
  2014-07-18 21:01:58.254 | ==
  2014-07-18 21:01:58.254 | 
  2014-07-18 21:01:58.254 | 
tempest.cli.simple_read_only.test_cinder.SimpleReadOnlyCinderClientTest.test_cinder_encryption_type_list
  2014-07-18 21:01:58.254 | 

  2014-07-18 21:01:58.254 | 
  2014-07-18 21:01:58.254 | Captured traceback:
  2014-07-18 21:01:58.254 | ~~~
  2014-07-18 21:01:58.254 | Traceback (most recent call last):
  2014-07-18 21:01:58.254 |   File 
"tempest/cli/simple_read_only/test_cinder.py", line 144, in 
test_cinder_encryption_type_list
  2014-07-18 21:01:58.254 | encrypt_list = 
self.parser.listing(self.cinder('encryption-type-list'))
  2014-07-18 21:01:58.254 |   File "tempest/cli/__init__.py", line 84, in 
cinder
  2014-07-18 21:01:58.255 | 'cinder', action, flags, params, admin, 
fail_ok)
  2014-07-18 21:01:58.255 |   File "tempest/cli/__init__.py", line 116, in 
cmd_with_auth
  2014-07-18 21:01:58.255 | return self.cmd(cmd, action, flags, params, 
fail_ok, merge_stderr)
  2014-07-18 21:01:58.255 |   File "tempest/cli/__init__.py", line 136, in 
cmd
  2014-07-18 21:01:58.255 | stderr=result_err)
  2014-07-18 21:01:58.255 | CommandFailed: Command 
'['/usr/local/bin/cinder', '--os-username', 'admin', '--os-tenant-name', 
'admin', '--os-password', 'secretadmin', '--os-auth-url', 
'http://127.0.0.1:5000/v2.0/', '--endpoint-type', 'publicURL', 
'encryption-type-list']' returned non-zero exit status 1
  2014-07-18 21:01:58.255 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1344513/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347078] [NEW] NSX: set_auth_cookie() is called twice when 401 occurs

2014-07-22 Thread Aaron Rosen
Public bug reported:

NSX: set_auth_cookie() is called twice when 401 occurs

** Affects: neutron
 Importance: Low
 Assignee: Aaron Rosen (arosen)
 Status: In Progress


** Tags: vmware

** Changed in: neutron
 Assignee: (unassigned) => Aaron Rosen (arosen)

** Changed in: neutron
   Importance: Undecided => Low

** Tags added: nicira

** Tags removed: nicira
** Tags added: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1347078

Title:
  NSX: set_auth_cookie() is called twice when 401 occurs

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  NSX: set_auth_cookie() is called twice when 401 occurs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1347078/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347072] [NEW] Assign js init methods with names

2014-07-22 Thread Thai Tran
Public bug reported:

Currently, we have old javascripts that automatically get initialized
when the document is ready. This patch provides a mechanism to access
that initialization code by assigning it a name.

Why do we need this? If we are to load things asynchronously (as in the
case of angular), we need a mechanism to postpone initialization code
until the data/layouts/etc. are retrieved and rendered. This would also
give us control over init code (not all of it is required all the time).

** Affects: horizon
 Importance: Undecided
 Assignee: Thai Tran (tqtran)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347072

Title:
  Assign js init methods with names

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Currently, we have old javascripts that automatically get initialized
  when the document is ready. This patch provides a mechanism to access
  that initialization code by assigning it a name.

  Why do we need this? If we are to load things asynchronously (as in
  the case of angular), we need a mechanism to postpone initialization
  code until the data/layouts/etc. are retrieved and rendered. This
  would also give us control over init code (not all of it is required
  all the time).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1347072/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347071] [NEW] HTMLElement final class string getter

2014-07-22 Thread Thai Tran
Public bug reported:

The class_string property does NOT return the final class string. This
is evident in some of the BatchAction classes (which is a subclass). The
only way to get the final class string is through the get_final_attrs
method.

In order to retrieve the final class string, we would need to calculate
the final attributes. This can be cumbersome if all we want is the final
class string. We need a mechanism that would allow us to retrieve just
the final class independently.
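
A minimal sketch of the kind of accessor being requested (illustrative
only; the property name is an assumption, and it presumes get_final_attrs()
returns a dict-like mapping of HTML attributes):

@property
def final_class_string(self):
    # compute the merged attributes once, then expose only the class value
    return self.get_final_attrs().get('class', '')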

** Affects: horizon
 Importance: Undecided
 Assignee: Thai Tran (tqtran)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347071

Title:
  HTMLElement final class string getter

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The class_string property does NOT return the final class string. This
  is evident in some of the BatchAction classes (which is a subclass).
  The only way to get the final class string is through the
  get_final_attrs method.

  In order to retrieve the final class string, we would need to
  calculate the final attributes. This can be cumbersome if all we want
  is the final class string. We need a mechanism that would allow us to
  retrieve just the final class independently.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1347071/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347067] [NEW] [Bootstrap] membership widget needs styling

2014-07-22 Thread Cindy Lu
Public bug reported:

Edit Flavor > Flavor Access tab

- two filter text fields are styled differently
- the magnifying glass icon is cut off
- header bar not aligned with row items
- more top padding for description

See attached image.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Untitled.png"
   
https://bugs.launchpad.net/bugs/1347067/+attachment/4160150/+files/Untitled.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347067

Title:
  [Bootstrap] membership widget needs styling

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Edit Flavor > Flavor Access tab

  - two filter text fields are styled differently
  - the magnifying glass icon is cut off
  - header bar not aligned with row items
  - more top padding for description

  See attached image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1347067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347039] [NEW] VMWare: available disk spaces(hypervisor-list) only based on a single datastore instead of all available datastores from cluster

2014-07-22 Thread zhu zhu
Public bug reported:

Currently, with the VMware backend, nova hypervisor-list (and
hypervisor-show) does not display correct values for local_gb,
free_disk_gb and local_gb_used when the compute node (cluster) has more
than one datastore. The code reports the resource update using only the
first datastore, which is incorrect.

For example, a compute cluster node may have 20 datastores available,
but only one of them is counted in the resource report. This can easily
cause deployment failures claiming there is no free space, even though
there is enough disk for VM deployments.
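
A minimal sketch of the aggregation this asks for (illustrative only;
the datastore attribute names are assumptions, not the actual driver
API):

def aggregate_datastore_stats(datastores):
    """Sum capacity and free space across all datastores in the cluster."""
    total_gb = sum(ds.capacity_gb for ds in datastores)
    free_gb = sum(ds.free_space_gb for ds in datastores)
    return {'local_gb': total_gb,
            'free_disk_gb': free_gb,
            'local_gb_used': total_gb - free_gb}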


[root@RegionServer1 nova]# nova hypervisor-show 1
+---------------------------+------------------------------------------------------------------+
| Property                  | Value                                                            |
+---------------------------+------------------------------------------------------------------+
| cpu_info_model            | ["Intel(R) Xeon(R) CPU X5675 @ 3.07GHz", ... 13 identical entries] |
| cpu_info_topology_cores   | 156                                                              |
| cpu_info_topology_threads | 312                                                              |
| cpu_info_vendor           | ["IBM"]                                                          |
| current_workload          | 0                                                                |
| disk_available_least      | -                                                                |
| free_disk_gb              | -2682                                                            |
| free_ram_mb               | 1545886                                                          |
| host_ip                   | 172.18.152.120                                                   |
| hypervisor_hostname       | domain-c661(BC1-Cluster)                                         |
| hypervisor_type           | VMware vCenter Server                                            |
| hypervisor_version        | 5001000                                                          |
| id                        | 1                                                                |
| local_gb                  | 1799                                                             |
| local_gb_used             | 4481                                                             |
| memory_mb                 | 1833630                                                          |
| memory_mb_used            | 287744                                                           |


[Yahoo-eng-team] [Bug 1347036] [NEW] [data processing] Register Image form is broken

2014-07-22 Thread Chad Roberts
Public bug reported:

When clicking on the Register Image button in the Data Processing ->
Image Registry, the user gets an error message.

It looks like the glance api returns an extra value that is unexpected.

Please fix this and add a test to cover this situation for the future.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347036

Title:
  [data processing] Register Image form is broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When clicking on the Register Image button in the Data Processing ->
  Image Registry, the user gets an error message.

  It looks like the glance api returns an extra value that is
  unexpected.

  Please fix this and add a test to cover this situation for the future.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1347036/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347028] [NEW] block_device mapping identifies ephemeral disks incorrectly

2014-07-22 Thread Vish Ishaya
Public bug reported:

Ephemeral drives are destination == local, but the new bdm code bases it
on source instead.  This leads to improper errors:

$ nova boot --flavor m1.tiny --block-device 
source=blank,dest=volume,bus=virtio,size=1,bootindex=0 test
ERROR (BadRequest): Ephemeral disks requested are larger than the instance type 
allows. (HTTP 400) (Request-ID: req-53247c8e-d14e-43e2-b01e-85b49f520e61)

The code is here:

https://github.com/openstack/nova/blob/106fb458c7ac3cc17bb42d1b83ec3f4fa8284e71/nova/block_device.py#L411

This should be checking destination_type == 'local' instead of source
type.
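
A minimal sketch of the intended check (illustrative only, not the exact
nova code; field names follow the block-device-mapping v2 format):

def is_ephemeral(bdm):
    # ephemeral disks live on local storage, regardless of their source
    return bdm.get('destination_type') == 'local'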

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: icehouse-backport-potential low-hanging-fruit

** Tags added: icehouse-backport-potential low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347028

Title:
  block_device mapping identifies ephemeral disks incorrectly

Status in OpenStack Compute (Nova):
  New

Bug description:
  Ephemeral drives are destination == local, but the new bdm code bases
  it on source instead.  This leads to improper errors:

  $ nova boot --flavor m1.tiny --block-device 
source=blank,dest=volume,bus=virtio,size=1,bootindex=0 test
  ERROR (BadRequest): Ephemeral disks requested are larger than the instance 
type allows. (HTTP 400) (Request-ID: req-53247c8e-d14e-43e2-b01e-85b49f520e61)

  The code is here:

  
https://github.com/openstack/nova/blob/106fb458c7ac3cc17bb42d1b83ec3f4fa8284e71/nova/block_device.py#L411

  This should be checking destination_type == 'local' instead of source
  type.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347025] [NEW] Iscsi connector always uses CONF.my_ip

2014-07-22 Thread Cory Stone
Public bug reported:

When attaching to a cinder volume, the virt drivers supply details about
where the iscsi connection is going to come from. However, if your
compute nodes have multiple network interfaces, there is no way to
specify which one is going to be used for the iscsi traffic.

It would be helpful if at least a config option allowed specifying the
storage ip.
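
A hedged sketch of the kind of option being requested (the option name
below is hypothetical, not an existing nova setting):

from oslo.config import cfg

opts = [
    cfg.StrOpt('iscsi_connector_ip',
               default=None,
               help='IP address to report for iSCSI traffic instead of '
                    'CONF.my_ip (hypothetical option).'),
]
cfg.CONF.register_opts(opts)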

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1347025

Title:
  Iscsi connector always uses CONF.my_ip

Status in OpenStack Compute (Nova):
  New

Bug description:
  When attaching to a cinder volume, the virt drivers supply details
  about where the iscsi connection is going to come from. However, if
  your compute nodes have multiple network interfaces, there is no way
  to specify which one is going to be used for the iscsi traffic.

  It would be helpful if at least a config option allowed specifying the
  storage ip.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1347025/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347002] [NEW] [Bootstrap] instance details Log and Console blank

2014-07-22 Thread Cindy Lu
Public bug reported:

Merging of https://review.openstack.org/#/c/107042/ results in:

Instance Details Log and Console tab being blank...

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

- https://review.openstack.org/#/c/107042/
+ Merging of https://review.openstack.org/#/c/107042/ results in:
  
- Instance Details Log and Console tab seem to be blank... see image.
+ Instance Details Log and Console tab being blank...

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1347002

Title:
  [Bootstrap] instance details Log and Console blank

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Merging of https://review.openstack.org/#/c/107042/ results in:

  Instance Details Log and Console tab being blank...

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1347002/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1285478] Re: Enforce alphabetical ordering in requirements file

2014-07-22 Thread Kurt Griffiths
Won't fix, as this will cause other issues per
https://github.com/openstack/oslo-
incubator/commit/93db699a99793288464fe56de27bcf676fcedd17

** Changed in: marconi
Milestone: juno-2 => None

** Changed in: marconi
   Status: In Progress => Won't Fix

** Changed in: marconi
   Importance: Low => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1285478

Title:
  Enforce alphabetical ordering in requirements file

Status in Blazar:
  Triaged
Status in Cinder:
  Invalid
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Won't Fix
Status in OpenStack Identity (Keystone):
  In Progress
Status in OpenStack Message Queuing Service (Marconi):
  Won't Fix
Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Python client library for Cinder:
  In Progress
Status in Python client library for Glance:
  In Progress
Status in Python client library for Ironic:
  Fix Committed
Status in Python client library for Neutron:
  Invalid
Status in Trove client binding:
  In Progress
Status in OpenStack contribution dashboard:
  Fix Released
Status in Storyboard database creator:
  In Progress
Status in Tempest:
  In Progress
Status in Openstack Database (Trove):
  In Progress
Status in Tuskar:
  Fix Released

Bug description:
  
  Sorting requirements files in alphabetical order makes them more
  readable and makes it easy to check whether a specific library is in
  the requirements files. Hacking doesn't check *.txt files.
  We already enforce this check in oslo-incubator
  https://review.openstack.org/#/c/66090/.

  This bug is used to track syncing this gate check to other projects.

  How to sync this to other projects:

  1.  Copy  tools/requirements_style_check.sh  to project/tools.

  2. run tools/requirements_style_check.sh  requirements.txt test-
  requirements.txt

  3. fix the violations

To manage notifications about this bug go to:
https://bugs.launchpad.net/blazar/+bug/1285478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346969] [NEW] ML2 Cisco Nexus MD: Don't delete a VLAN with existing Configuration

2014-07-22 Thread Abishek Subramanian
Public bug reported:

If an existing configuration is already present on a switchport VLAN
interface before Openstack adds the VLAN, then, at the time of deleting
the Openstack VLAN, do not delete the config already present.

** Affects: neutron
 Importance: Undecided
 Assignee: Abishek Subramanian (absubram)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Abishek Subramanian (absubram)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346969

Title:
  ML2 Cisco Nexus MD: Don't delete a VLAN with existing Configuration

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If an existing configuration is already present on a switchport VLAN
  interface before Openstack adds the VLAN, then, at the time of
  deleting the Openstack VLAN, do not delete the config already present.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346969/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346966] [NEW] Heal migration fails if MySQL is using MyISAM engine

2014-07-22 Thread Ann Kamyshnikova
Public bug reported:

If the MySQL default engine is set to MyISAM, the heal migration fails
with this error log: http://paste.openstack.org/show/87625/

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New


** Tags: db

** Changed in: neutron
 Assignee: (unassigned) => Ann Kamyshnikova (akamyshnikova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346966

Title:
  Heal migration fails if MySQL is using MyISAM engine

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If the MySQL default engine is set to MyISAM, the heal migration fails
  with this error log: http://paste.openstack.org/show/87625/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346966/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316475] Re: [SRU] CloudSigma DS for causes hangs when serial console present

2014-07-22 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.5-0ubuntu1.1

---
cloud-init (0.7.5-0ubuntu1.1) trusty-proposed; urgency=medium

  [ Ben Howard ]
  * debian/patches/lp1316475-1303986-cloudsigma.patch: Backport of
CloudSigma Datasource from 14.10
- [FFe] Support VendorData for CloudSigma (LP: #1303986).
- Only query /dev/ttys1 when CloudSigma is detected (LP: #1316475).

  [ Scott Moser ]
  * debian/cloud-init.templates: fix choices so dpkg-reconfigure works as
expected (LP: #1325746)
 -- Scott Moser  Fri, 20 Jun 2014 13:29:29 -0400

** Changed in: cloud-init (Ubuntu Trusty)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1316475

Title:
  [SRU] CloudSigma DS for causes hangs when serial console present

Status in Init scripts for use on cloud images:
  Fix Committed
Status in Openstack disk image builder:
  Fix Released
Status in tripleo - openstack on openstack:
  Invalid
Status in “cloud-init” package in Ubuntu:
  Fix Released
Status in “cloud-init” source package in Trusty:
  Fix Released

Bug description:
  SRU Justification

  Impact: The Cloud Sigma Datasource read and writes to /dev/ttyS1 if
  present; the Datasource does not have a time out. On non-CloudSigma
  Clouds or systems w/ /dev/ttyS1, Cloud-init will block pending a
  response, which may never come. Further, it is dangerous for a default
  datasource to write blindly on a serial console as other control plane
  software and Clouds use /dev/ttyS1 for communication.

  Fix: The patch queries the BIOS to see if the instance is running on
  CloudSigma before querying /dev/ttys1.
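
  A minimal illustration of that kind of guard (illustrative only, not
  the actual cloud-init patch; it assumes the DMI system vendor string
  identifies CloudSigma):

  def is_cloudsigma():
      # Read the SMBIOS/DMI system vendor exposed by the kernel and only
      # touch /dev/ttyS1 when it names CloudSigma.
      try:
          with open('/sys/class/dmi/id/sys_vendor') as f:
              return 'cloudsigma' in f.read().strip().lower()
      except IOError:
          return False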

  Verification: On both a CloudSigma instance and non-CloudSigma instance with 
/dev/ttys1:
  1. Install new cloud-init
  2. Purge existing cloud-init data (rm -rf /var/lib/cloud)
  3. Run "cloud-init --debug init"
  4. Confirm that CloudSigma provisioned while CloudSigma datasource skipped 
non-CloudSigma instance

  Regression: The risk is low, as this change further restrict where the
  CloudSigma Datasource can run.

  [Original Report]
  DHCPDISCOVER on eth2 to 255.255.255.255 port 67 interval 3 (xid=0x7e777c23)
  DHCPREQUEST of 10.22.157.186 on eth2 to 255.255.255.255 port 67 
(xid=0x7e777c23)
  DHCPOFFER of 10.22.157.186 from 10.22.157.149
  DHCPACK of 10.22.157.186 from 10.22.157.149
  bound to 10.22.157.186 -- renewal in 39589 seconds.
   * Starting Mount network filesystems[ OK 
]
   * Starting configure network device [ OK 
]
   * Stopping Mount network filesystems[ OK 
]
   * Stopping DHCP any connected, but unconfigured network interfaces  [ OK 
]
   * Starting configure network device [ OK 
]
   * Stopping DHCP any connected, but unconfigured network interfaces  [ OK 
]
   * Starting configure network device [ OK 
]

  And it stops there.

  I see this on about 10% of deploys.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1316475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346931] [NEW] [data processing] Add data sources from job launch

2014-07-22 Thread Chad Roberts
Public bug reported:

*wishlist for juno*

As discussed at the Atlanta design summit, the launch job dialog prompts
the user to select data sources to use as input and output for the job.
As a user, it is easy to forget to create those data sources ahead of
time and it's annoying to have to back out, create them and then go back
to the job and start the launch process again.

It was suggested that the select boxes for inputs be changed to include
the [+] button that will allow the user to create a new data source 'on-
the-fly' for use in the job launch and in the future.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: sahara

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1346931

Title:
  [data processing] Add data sources from job launch

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  *wishlist for juno*

  As discussed at the Atlanta design summit, the launch job dialog
  prompts the user to select data sources to use as input and output for
  the job.  As a user, it is easy to forget to create those data sources
  ahead of time and it's annoying to have to back out, create them and
  then go back to the job and start the launch process again.

  It was suggested that the select boxes for inputs be changed to
  include the [+] button that will allow the user to create a new data
  source 'on-the-fly' for use in the job launch and in the future.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1346931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346932] [NEW] delete floating ip via neutron port-delete

2014-07-22 Thread Joe Talerico
Public bug reported:

When running neutron port-delete  I get a traceback
referencing :

2014-07-21 16:34:28.769 31455 TRACE neutron.api.v2.resource DBError:
(IntegrityError) (1451, 'Cannot delete or update a parent row: a foreign
key constraint fails (`neutron`.`floatingips`, CONSTRAINT
`floatingips_ibfk_2` FOREIGN KEY (`floating_port_id`) REFERENCES `ports`
(`id`))') 'DELETE FROM ports WHERE ports.id = %s' ('25c9a306-6f5f-4630
-99ec-78893b1e766a',)

Instead of dumping an unhelpful trace to the logs, shouldn't there be a
message telling the user to use the right command to remove the floating
IP port?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346932

Title:
  delete floating ip via neutron port-delete

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When running neutron port-delete  I get a traceback
  referencing :

  2014-07-21 16:34:28.769 31455 TRACE neutron.api.v2.resource DBError:
  (IntegrityError) (1451, 'Cannot delete or update a parent row: a
  foreign key constraint fails (`neutron`.`floatingips`, CONSTRAINT
  `floatingips_ibfk_2` FOREIGN KEY (`floating_port_id`) REFERENCES
  `ports` (`id`))') 'DELETE FROM ports WHERE ports.id = %s'
  ('25c9a306-6f5f-4630-99ec-78893b1e766a',)

  Instead of dumping an unhelpful trace to the logs, shouldn't there be a
  message telling the user to use the right command to remove the
  floating IP port?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346932/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346902] [NEW] update router failure

2014-07-22 Thread Zang MingJie
Public bug reported:

If update a router created before dvr patch, following exception occurs,
because router['extra_attributes'] is None

2014-07-22 20:55:11.682 ERROR neutron.api.v2.resource 
[req-51908a32-4d89-41f6-ae55-c96d77e55e5c admin 
17f0aea3b612493b959ca810fafe13e7] update failed
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 87, in resource
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 529, in update
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource obj = 
obj_updater(request.context, id, **kwargs)
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/extraroute_db.py", line 74, in update_router
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource context, id, router)
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_db.py", line 173, in update_router
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource router_db = 
self._update_router_db(context, id, r, gw_info)
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 94, in _update_router_db
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource 
self._validate_router_migration(router_db, data)
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 71, in 
_validate_router_migration
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource if 
(router_db.extra_attributes.distributed and
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource AttributeError: 
'NoneType' object has no attribute 'distributed'
2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource
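
A minimal illustration of the None guard this implies (illustrative
only, not the actual neutron patch):

extra = router_db.extra_attributes
is_distributed = extra.distributed if extra is not None else False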

** Affects: neutron
 Importance: Undecided
 Assignee: Zang MingJie (zealot0630)
 Status: In Progress

** Description changed:

- If update a router created before dvr patch, following exception occurs
- 
+ If update a router created before dvr patch, following exception occurs,
+ because router['extra_attributes'] is None
  
  2014-07-22 20:55:11.682 ERROR neutron.api.v2.resource 
[req-51908a32-4d89-41f6-ae55-c96d77e55e5c admin 
17f0aea3b612493b959ca810fafe13e7] update failed
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 87, in resource
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 529, in update
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource obj = 
obj_updater(request.context, id, **kwargs)
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/extraroute_db.py", line 74, in update_router
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource context, id, router)
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_db.py", line 173, in update_router
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource router_db = 
self._update_router_db(context, id, r, gw_info)
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 94, in _update_router_db
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource 
self._validate_router_migration(router_db, data)
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/l3_dvr_db.py", line 71, in 
_validate_router_migration
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource if 
(router_db.extra_attributes.distributed and
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource AttributeError: 
'NoneType' object has no attribute 'distributed'
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346902

Title:
  update router failure

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  If update a router created before dvr patch, following exception
  occurs, because router['extra_attributes'] is None

  2014-07-22 20:55:11.682 ERROR neutron.api.v2.resource 
[req-51908a32-4d89-41f6-ae55-c96d77e55e5c admin 
17f0aea3b612493b959ca810fafe13e7] update failed
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2014-07-22 20:55:11.682 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 87, in resourc

[Yahoo-eng-team] [Bug 1346900] [NEW] Tables tz_network_bindings and tz_network_bindings use nullable on primary keys

2014-07-22 Thread Jakub Libosvar
Public bug reported:

A primary key cannot be null, so marking a primary key column as
nullable is wrong, and it causes autogenerate to produce an incorrect
migration script.
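
A minimal SQLAlchemy illustration of the point (generic sketch, not the
actual neutron model):

from sqlalchemy import Column, MetaData, String, Table

metadata = MetaData()
example_binding = Table(
    'example_binding', metadata,
    # primary key columns must never be declared nullable
    Column('network_id', String(36), primary_key=True, nullable=False),
    Column('binding_type', String(32), primary_key=True, nullable=False),
)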

** Affects: neutron
 Importance: Undecided
 Assignee: Jakub Libosvar (libosvar)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Jakub Libosvar (libosvar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346900

Title:
  Tables tz_network_bindings and tz_network_bindings use nullable on
  primary keys

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  A primary key cannot be null, so marking a primary key column as
  nullable is wrong, and it causes autogenerate to produce an incorrect
  migration script.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346866] [NEW] EndpointNotFound when deleting volume-backed instance

2014-07-22 Thread Liyingjun
Public bug reported:

When booting a volume-backed instance, volume creation may fail, and the
following error then occurs when deleting the instance:

2014-07-22 11:19:15.305 14601 ERROR nova.compute.manager [-] [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815] Failed to complete a deletion
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815] Traceback (most recent call last):
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 845, in 
_init_instance
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815] self._delete_instance(context, 
instance, bdms)
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File 
"/usr/lib/python2.7/dist-packages/nova/hooks.py", line 103, in inner
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815] rv = f(*args, **kwargs)
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2220, in 
_delete_instance
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815] user_id=user_id)
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815] six.reraise(self.type_, self.value, 
self.tb)
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2190, in 
_delete_instance
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815] self._shutdown_instance(context, 
db_inst, bdms)
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2136, in 
_shutdown_instance
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815] connector)
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File 
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 174, in wrapper
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815] res = method(self, ctx, volume_id, 
*args, **kwargs)
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File 
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 276, in 
terminate_connection
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815] return 
cinderclient(context).volumes.terminate_connection(volume_id,
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File 
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 92, in 
cinderclient
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815] endpoint_type=endpoint_type)
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815]   File 
"/usr/lib/python2.7/dist-packages/cinderclient/service_catalog.py", line 80, in 
url_for
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815] raise 
cinderclient.exceptions.EndpointNotFound()
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815] EndpointNotFound
2014-07-22 11:19:15.305 14601 TRACE nova.compute.manager [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815]

** Affects: nova
 Importance: Undecided
 Assignee: Liyingjun (liyingjun)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Liyingjun (liyingjun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346866

Title:
  EndpointNotFound when deleting volume-backed instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  When booting a volume-backed instance, volume creation may fail, and
  the following error then occurs when deleting the instance:

  2014-07-22 11:19:15.305 14601 ERROR nova.compute.manager [-] [instance: 
dc0d5ad1-caa7-4160-9f6f-158ca1193815] Failed to complet

[Yahoo-eng-team] [Bug 1346861] [NEW] l3 cannot re-create device in deleted namespace

2014-07-22 Thread Jun Wu
Public bug reported:

If an ovs-managed device (device created by add-port followed by set
type=internal)'s namespace is being used by some process and then
deleted, L3 agent will fail to re-create the device.

Steps to repro:

- Stop l3-agent.
- Choose a router namespace with at least one ovs-managed device in it. For 
example, "qrouter-df5a3693-ec4d-4023-9e73-8dce9c4ac184" has a device 
"qg-df5a3693-ec"
- Ensure the namespace is used by at least one process. For demo purpose, start 
another shell using "ip netns exec qrouter-df5a3693-ec4d-4023-9e73-8dce9c4ac184 
bash". In reality, ns-metadata-proxy or keepalived may live in the namespace
- Delete the namespace by "ip netns del 
qrouter-df5a3693-ec4d-4023-9e73-8dce9c4ac184". The command won't fail and the 
devices in the deleted namespace are still alive, observable by "ip link" in 
previously opened shell. However, there is no easy method to enter the 
namespace from outside again.
- Start l3 agent.
- Verify "qg-df5a3693-ec" cannot be recreated and managed by L3. The backtrace 
looks like (this is our branch, may differ with upstream):

  ERROR neutron.agent.l3_agent Failed synchronizing routers
  TRACE neutron.agent.l3_agent Traceback (most recent call last):
  TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/l3_agent.py", line 1429, in _sync_routers_task
  TRACE neutron.agent.l3_agent self._process_routers(routers, 
all_routers=True)
  TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/l3_agent.py", line 1354, in _process_routers
  TRACE neutron.agent.l3_agent self._router_added(r['id'], r)
  TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/l3_agent.py", line 672, in _router_added
  TRACE neutron.agent.l3_agent self.process_ha_router_added(ri)
  TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/l3_agent.py", line 923, in 
process_ha_router_added
  TRACE neutron.agent.l3_agent vip_cidrs=[gw_ip_cidr])
  TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/l3_agent.py", line 897, in ha_network_added
  TRACE neutron.agent.l3_agent prefix=HA_DEV_PREFIX)
  TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/interface.py", line 194, in plug
  TRACE neutron.agent.l3_agent ns_dev.link.set_address(mac_address)
  TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 230, in set_address
  TRACE neutron.agent.l3_agent self._as_root('set', self.name, 'address', 
mac_address)
  TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 217, in _as_root
  TRACE neutron.agent.l3_agent kwargs.get('use_root_namespace', False))
  TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 70, in _as_root
  TRACE neutron.agent.l3_agent namespace)
  TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 81, in _execute
  TRACE neutron.agent.l3_agent root_helper=root_helper)
  TRACE neutron.agent.l3_agent   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 90, in execute
  TRACE neutron.agent.l3_agent raise RuntimeError(m)
  TRACE neutron.agent.l3_agent RuntimeError: 
  TRACE neutron.agent.l3_agent Command: ['sudo', 
'/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'link', 
'set', 'ha-5bd08318-aa', 'address', 'fa:16:3e:f3:2b:6b']
  TRACE neutron.agent.l3_agent Exit code: 1
  TRACE neutron.agent.l3_agent Stdout: ''
  TRACE neutron.agent.l3_agent Stderr: 'Cannot find device "ha-5bd08318-aa"\n'
  TRACE neutron.agent.l3_agent 

The root cause is that ovs-vsctl "can perform any number of commands in
a single run, implemented as a single atomic transaction against the
database", and neutron currently uses the following to create an
ovs-managed device:

  ovs-vsctl -- --if-exists del-port qr-2f4c613d-b7 -- add-port br-int
qr-2f4c613d-b7 -- set Interface qr-2f4c613d-b7 type=internal -- set
Interface qr-2f4c613d-b7 external-ids:iface-id=2f4c613d-
b7f2-4d63-89c8-af2d48948d19 -- set Interface qr-2f4c613d-b7 external-ids
:iface-status=active -- set Interface qr-2f4c613d-b7 external-ids
:attached-mac=fa:16:3e:3c:4d:18

ovs can delete devices it manages even if the device is in a deleted
(lost) namespace. But if del-port, add-port and set type=internal are
put together in one ovs-vsctl command, ovs will do nothing to the device
and the device is left as is.
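
A minimal sketch of splitting the delete into its own ovs-vsctl
transaction before re-adding the port (illustrative only, not the actual
neutron change; it reuses the port name from the example above):

import subprocess

port = 'qr-2f4c613d-b7'
# first transaction: remove any stale port, even one left in a lost namespace
subprocess.check_call(['ovs-vsctl', '--', '--if-exists', 'del-port', port])
# second transaction: re-add the port as an internal interface
subprocess.check_call(['ovs-vsctl', '--', 'add-port', 'br-int', port,
                       '--', 'set', 'Interface', port, 'type=internal'])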


In OVSInterfaceDriver.plug(self, network_id, port_id, device_name,
                           mac_address, bridge=None, namespace=None,
                           prefix=None):

    self._ovs_add_port(bridge, tap_name, port_id, mac_address,
                       internal=internal)

    ns_dev.link.set_address(mac_address)

    if self.conf.network_device_mtu:
        ns_dev.link.set_mtu(self.conf.network_device_mtu)
        if self.conf.ovs_use_veth:
            root_dev.link.set_mtu(self.conf.network_device_mtu)

    # Add an interface created 

[Yahoo-eng-team] [Bug 1346857] [NEW] HyperV driver does not implement Image cache ageing

2014-07-22 Thread Sagar Ratnakara Nikam
Public bug reported:

Nova.conf has the following options for image cache ageing
remove_unused_base_images
remove_unused_original_minimum_age_seconds.

If these conf values are enabled, older un-used images cached on the hypervisor 
hosts will be deleted by nova-compute.
The driver should implement the defn manage_image_cache
and this will be called by compute.manager._run_image_cache_manager_pass().

The HyperV driver does not implement manage_image_cache() which means
older unused images (VHD/VHDX) will never get deleted and occupy
storage on the HyperV host, which otherwise can be used to spawn
instances.
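
A minimal stub of the missing driver hook (illustrative only; the cache
manager attribute is an assumption, not actual Hyper-V driver code):

def manage_image_cache(self, context, all_instances):
    """Age out cached VHD/VHDX images that no instance references."""
    # invoked periodically via compute.manager._run_image_cache_manager_pass()
    self._image_cache_manager.update(context, all_instances)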

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: hyper-v

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346857

Title:
  HyperV driver does not implement Image cache ageing

Status in OpenStack Compute (Nova):
  New

Bug description:
  Nova.conf has the following options for image cache ageing
  remove_unused_base_images
  remove_unused_original_minimum_age_seconds.

  If these conf values are enabled, older un-used images cached on the 
hypervisor hosts will be deleted by nova-compute.
  The driver should implement the defn manage_image_cache
  and this will be called by compute.manager._run_image_cache_manager_pass().

  The HyperV driver does not implement manage_image_cache() which means
  older unused images (VHD/VHDX) will never get deleted and occupy
  storage on the HyperV host, which otherwise can be used to spawn
  instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1346857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346829] Re: support disable_terminate for instance

2014-07-22 Thread lvdongbing
** Also affects: python-novaclient
   Importance: Undecided
   Status: New

** Changed in: python-novaclient
 Assignee: (unassigned) => lvdongbing (dbcocle)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346829

Title:
  support disable_terminate for instance

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Nova:
  New

Bug description:
  With a 'disable_terminate' option, we could protect an instance from
  being accidentally terminated, but currently nova doesn't support it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1346829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346829] [NEW] support disable_terminate for instance

2014-07-22 Thread lvdongbing
Public bug reported:

With a 'disable_terminate' option, we could protect an instance from
being accidentally terminated, but currently nova doesn't support it.

** Affects: nova
 Importance: Undecided
 Assignee: lvdongbing (dbcocle)
 Status: New

** Affects: python-novaclient
 Importance: Undecided
 Assignee: lvdongbing (dbcocle)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => lvdongbing (dbcocle)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346829

Title:
  support disable_terminate for instance

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Nova:
  New

Bug description:
  With a 'disable_terminate' option, we could protect an instance from
  being accidentally terminated, but currently nova doesn't support it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1346829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339273] Re: Sphinx documentation build failed in stable/havana: source_dir is not a directory

2014-07-22 Thread Alan Pevec
** Changed in: heat
   Status: New => Invalid

** Also affects: heat/havana
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1339273

Title:
  Sphinx documentation build failed in stable/havana: source_dir is not
  a directory

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in Glance havana series:
  New
Status in Orchestration API (Heat):
  Invalid
Status in heat havana series:
  New
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Committed

Bug description:
  Documentation is not building in stable/havana:

  $ tox -evenv -- python setup.py build_sphinx
  venv inst: /opt/stack/horizon/.tox/dist/horizon-2013.2.4.dev9.g19634d6.zip
  venv runtests: PYTHONHASHSEED='1422458638'
  venv runtests: commands[0] | python setup.py build_sphinx
  running build_sphinx
  error: 'source_dir' must be a directory name (got 
`/opt/stack/horizon/doc/source`)
  ERROR: InvocationError: '/opt/stack/horizon/.tox/venv/bin/python setup.py 
build_sphinx'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1339273/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339273] Re: Sphinx documentation build failed in stable/havana: source_dir is not a directory

2014-07-22 Thread Alan Pevec
** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
   Status: New => Invalid

** Also affects: glance/havana
   Importance: Undecided
   Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1339273

Title:
  Sphinx documentation build failed in stable/havana: source_dir is not
  a directory

Status in OpenStack Image Registry and Delivery Service (Glance):
  Invalid
Status in Glance havana series:
  New
Status in Orchestration API (Heat):
  Invalid
Status in heat havana series:
  New
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Committed

Bug description:
  Documentation is not building in stable/havana:

  $ tox -evenv -- python setup.py build_sphinx
  venv inst: /opt/stack/horizon/.tox/dist/horizon-2013.2.4.dev9.g19634d6.zip
  venv runtests: PYTHONHASHSEED='1422458638'
  venv runtests: commands[0] | python setup.py build_sphinx
  running build_sphinx
  error: 'source_dir' must be a directory name (got 
`/opt/stack/horizon/doc/source`)
  ERROR: InvocationError: '/opt/stack/horizon/.tox/venv/bin/python setup.py 
build_sphinx'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1339273/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346820] [NEW] Middleware auth_token fails with scoped federated saml token

2014-07-22 Thread Mahesh Sawaiker
Public bug reported:

Do the following steps:
1) Set up keystone for federation.
2) Generate an unscoped federated token.
3) Generate a scoped token using the token from step 2.
4) Set up nova/glance to use the keystone v3 API.
5) Try an image list command using the following request:

Request

GET http://sp.machine:9292/v2/images
Headers:
Content-Type: application/json
Accept: application/json
X-Auth-Token: e92a49262a8d403db838d6494e4f9991

6) This will break the auth_token middleware (middleware/auth_token.py)
with a KeyError at the following place in the function
_build_user_headers:

user = token['user']
user_domain_id = user['domain']['id']
user_domain_name = user['domain']['name']

This is because the token does not contain any domain id or name under
the user info, since federated tokens carry no domain information about
the user.

This can be fixed simply by putting an if condition around the
problematic code. I have tested this fix and was then able to get the
image list and server list using the glance and nova REST APIs.

Example
vim "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py"

if 'domain' in user:
    user_domain_id = user['domain']['id']
    user_domain_name = user['domain']['name']


Following is the token information; note that there is no domain under the user:

{
  "token": {
    "methods": [
      "saml2"
    ],
    "roles": [
      {
        "id": "aad3b40ebb3b442f8fe85e88b21f3b4c",
        "name": "admin"
      }
    ],
    "expires_at": "2014-07-22T10:15:05.367852Z",
    "project": {
      "domain": {
        "id": "default",
        "name": "Default"
      },
      "id": "6e99b7d923bc437381fd1b2b4d890339",
      "name": "admin"
    },
    "catalog": [
      {
        "endpoints": [
          {
            "url": "https://127.0.0.1/keystone/main/v3",
            "interface": "internal",
            "region": "regionOne",
            "id": "f5dad391109542cba959d2e27c5fe3a2"
          },
          {
            "url": "https://172.20.15.103:8443/keystone/main/v3",
            "interface": "public",
            "region": "regionOne",
            "id": "4f76970e4ab5497d9149d56d455499ac"
          },
          {
            "url": "https://172.20.15.103:8443/keystone/admin/v3",
            "interface": "admin",
            "region": "regionOne",
            "id": "b85e76ca32f640c4a4d84068c71d3bf2"
          },
          {
            "url": "https://172.20.15.103:8443/keystone/admin/v2.0",
            "interface": "admin",
            "region": "regionOne",
            "id": "1ae909491d754aeb8c8b8a5c5fa6ad47"
          },
          {
            "url": "https://127.0.0.1/keystone/main/v2.0",
            "interface": "internal",
            "region": "regionOne",
            "id": "daf4ce3876d04285a106d86e0fea9bd1"
          },
          {
            "url": "https://172.20.15.103:8443/keystone/main/v2.0",
            "interface": "public",
            "region": "regionOne",
            "id": "f763c80100954bc4805cf51b3dddb84b"
          }
        ],
        "type": "identity",
        "id": "0f79e21861a94fcd84b72cae3ebd79e5"
      },
      {
        "endpoints": [
          {
            "url": "http://172.20.15.103:9292",
            "interface": "admin",
            "region": "RegionOne",
            "id": "16ffa8cebadd4d239744ea168efcd109"
          },
          {
            "url": "http://172.20.15.103:9292",
            "interface": "internal",
            "region": "RegionOne",
            "id": "944adaa070f44f21aa8a73fab15f07bb"
          },
          {
            "url": "http://127.0.0.1:9292",
            "interface": "public",
            "region": "RegionOne",
            "id": "cd945f6a5ee8410bbfe8d3572e23ee5d"
          }
        ],
        "type": "image",
        "id": "fe5d67da897b4359810d95e2c591fe21"
      },
      {
        "endpoints": [
          {
            "url": "http://172.20.15.103:8776/v1/6e99b7d923bc437381fd1b2b4d890339",
            "interface": "admin",
            "region": "RegionOne",
            "id": "6d93d29279a6483783298eb67159b5c6"
          },
          {
            "url": "http://172.20.15.103:8776/v1/6e99b7d923bc437381fd1b2b4d890339",
            "interface": "internal",
            "region": "RegionOne",
            "id": "9416222ad31a411294718b8fe4988daf"
          },
          {
            "url": "http://127.0.0.1:8776/v1/6e99b7d923bc437381fd1b2b4d890339",
            "interface": "public",
            "region": "RegionOne",
            "id": "4d924ad3cb1a442a929536f90a1612b6"
          }
        ],
        "type": "volume",
        "id": "55ef917e57a540e9b0353f02dec22512"
      },
      {
        "endpoints": [
          {
            "url": "http://172.20.15.103:9696",
            "interface": "admin",
            "region": "RegionOne",
            "id": "5fe8a0a8f6624e2cae2e2a8556919c2f"
          },
          {
            "url": "http:

[Yahoo-eng-team] [Bug 1336207] Re: [OSSA 2014-025] There is no quota for allowed address pair (CVE-2014-3555)

2014-07-22 Thread Tristan Cacqueray
** Changed in: ossa
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1336207

Title:
  [OSSA 2014-025] There is no quota for allowed address pair
  (CVE-2014-3555)

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  Fix Committed
Status in neutron icehouse series:
  Fix Committed
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Hi all,

  There is no quota for allowed address pairs, so a user can create an
  unlimited number of them, and in the backend there is at least one
  iptables rule per allowed address pair.  I tested adding about 10,000
  allowed address pairs with the attached script; it took about 30
  seconds to refresh the iptables rules in the kernel.  A malicious user
  could use this API to attack compute nodes, making them crash or
  become very slow simply by adding enough allowed address pair rules.
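
  As a minimal sketch of the exposure (not from the original report; the
  port UUID and credentials are placeholders), nothing stops a tenant
  from attaching an arbitrary number of allowed address pairs to one
  port:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://keystone:5000/v2.0')
    # Each pair typically ends up as at least one iptables rule on the
    # compute node hosting the port.
    pairs = [{'ip_address': '10.0.%d.%d' % (i // 256, i % 256)}
             for i in range(1, 10001)]
    neutron.update_port('<port-uuid>',
                        {'port': {'allowed_address_pairs': pairs}})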

  Thanks.
  Liping Mao

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1336207/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346816] [NEW] live migration failed when using shared instance path with QCOW2

2014-07-22 Thread Jay Lau
Public bug reported:


Currently, live migration fails when using a shared instance path with
QCOW2 because 'instance_dir' is not defined.

File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
result = getattr(endpoint, method)(ctxt, **new_args)

  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 410, in 
decorated_function
return function(self, context, *args, **kwargs)

  File "/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped
payload)

  File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", 
line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)

  File "/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in wrapped
return f(self, context, *args, **kw)

  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 323, in 
decorated_function
kwargs['instance'], e, sys.exc_info())

  File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", 
line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)

  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 311, in 
decorated_function
return function(self, context, *args, **kwargs)

  File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4661, 
in pre_live_migration
migrate_data)

  File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 
4739, in pre_live_migration
instance_dir, disk_info)

UnboundLocalError: local variable 'instance_dir' referenced before assignment
 to caller
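
A minimal, self-contained sketch (not the actual nova code; names and paths
are illustrative) of the failure pattern behind the traceback above: a local
variable assigned on only one branch and then used unconditionally.

    def pre_live_migration(is_shared_instance_path):
        if not is_shared_instance_path:
            # instance_dir is only assigned on the non-shared-storage branch ...
            instance_dir = '/var/lib/nova/instances/<uuid>'
        # ... so with a shared instance path (the QCOW2-on-shared-storage
        # case) the reference below raises UnboundLocalError.
        return 'plugging disks under %s' % instance_dir

    pre_live_migration(is_shared_instance_path=True)  # UnboundLocalError

One obvious fix for this pattern is to assign instance_dir unconditionally
before the branch, or to guard its later use.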

** Affects: nova
 Importance: Undecided
 Assignee: Jay Lau (jay-lau-513)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346816

Title:
  live migration failed when using shared instance path with QCOW2

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  Currently, live migration fails when using a shared instance path with
  QCOW2 because 'instance_dir' is not defined.

  File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", 
line 123, in _do_dispatch
  result = getattr(endpoint, method)(ctxt, **new_args)

File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 410, 
in decorated_function
  return function(self, context, *args, **kwargs)

File "/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in 
wrapped
  payload)

File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", 
line 82, in __exit__
  six.reraise(self.type_, self.value, self.tb)

File "/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in 
wrapped
  return f(self, context, *args, **kw)

File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 323, 
in decorated_function
  kwargs['instance'], e, sys.exc_info())

File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", 
line 82, in __exit__
  six.reraise(self.type_, self.value, self.tb)

File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 311, 
in decorated_function
  return function(self, context, *args, **kwargs)

File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 4661, 
in pre_live_migration
  migrate_data)

File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 
4739, in pre_live_migration
  instance_dir, disk_info)

  UnboundLocalError: local variable 'instance_dir' referenced before assignment
   to caller

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1346816/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346782] Re: Dashboard does not gracefully handle a single broken region

2014-07-22 Thread Robert van Leeuwen
Duplicate https://bugs.launchpad.net/horizon/+bug/1323811

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1346782

Title:
  Dashboard does not gracefully handle a single broken region

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When the API of a single region is broken, the whole dashboard breaks.
  This interrupts end users in all regions even though only a single
  region is broken.

  How to reproduce:
  Add a second region to keystone for a service, e.g. nova, whose
  endpoint is non-functional.
  Logging in will no longer work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1346782/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346782] [NEW] Dashboard does not gracefully handle a single broken region

2014-07-22 Thread Robert van Leeuwen
Public bug reported:

When the API of a single region is broken, the whole dashboard breaks.
This interrupts end users in all regions even though only a single region is
broken.

How to reproduce:
Add a second region to keystone for a service, e.g. nova, whose endpoint is
non-functional.
Logging in will no longer work.
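
A minimal reproduction sketch (not from the original report; the endpoint
URL, admin token and keystone address are placeholders, and the keystone
v2.0 admin API is assumed):

    from keystoneclient.v2_0 import client

    keystone = client.Client(token='<admin-token>',
                             endpoint='http://keystone:35357/v2.0')
    nova = [s for s in keystone.services.list() if s.type == 'compute'][0]
    # Register a second region whose compute endpoint never answers
    # (192.0.2.1 is a TEST-NET address).
    keystone.endpoints.create(region='RegionTwo',
                              service_id=nova.id,
                              publicurl='http://192.0.2.1:8774/v2/%(tenant_id)s',
                              adminurl='http://192.0.2.1:8774/v2/%(tenant_id)s',
                              internalurl='http://192.0.2.1:8774/v2/%(tenant_id)s')

After this, dashboard logins fail for every region, not just RegionTwo.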

** Affects: horizon
 Importance: Undecided
 Status: Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1346782

Title:
  Dashboard does not gracefully handle a single broken region

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When the API of a single region is broken, the whole dashboard breaks.
  This interrupts end users in all regions even though only a single
  region is broken.

  How to reproduce:
  Add a second region to keystone for a service, e.g. nova, whose
  endpoint is non-functional.
  Logging in will no longer work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1346782/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346778] [NEW] neutron policy can't match neutron keystone user

2014-07-22 Thread Kevin Benton
Public bug reported:

The policy.json keywords have no way to match the username of the
neutron keystone credentials. This is relevant because neutron is
overprivileged when it has an admin account. To solve this, a deployer
can give it an account with the service role instead of the admin role.
However, for this to work the deployer then has to modify the is_admin
rule in policy.json to hard-code the user_name or user_id used by
neutron, so that that account can be promoted to admin-level operations
inside of neutron.
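
As an illustration of the hard-coded rule the report describes (the rule
name matches neutron's shipped policy.json; the user id is a placeholder),
the admin-promotion rule would end up looking something like:

    "context_is_admin": "role:admin or user_id:<neutron-service-user-id>"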

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1346778

Title:
  neutron policy can't match neutron keystone user

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The policy.json keywords have no way to match the username of the
  neutron keystone credentials. This is relevant because neutron is
  overprivileged when it has an admin account. To solve this, a deployer
  can give it an account with the service role instead of the admin
  role. However, for this to work the deployer then has to modify the
  is_admin rule in policy.json to hard-code the user_name or user_id
  used by neutron, so that that account can be promoted to admin-level
  operations inside of neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1346778/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp