[Yahoo-eng-team] [Bug 1408859] [NEW] Scheduler didn't release already-allocated HostState resources after a multiple-instance create fails

2015-01-08 Thread Rui Chen
Public bug reported:

We boot 3 instances in a single multiple-create request, but the host only has
enough resources for 1 instance. In select_destinations, nova-scheduler
consumes the selected host's resources for the first instance.
After the multiple create fails, we try to boot 1 instance with the same
flavor. The host has enough resources to boot it, but nova-scheduler raises
'No Valid Host'. Worse, the host resource tracker only writes the compute node
to the DB when the host resources have changed, so the ComputeNode's update
time in the DB stays older than the update time in the scheduler cache and the
cache is never refreshed. In this case, the host will never be selected again.
We need to release the host resources when a multiple-instance create fails.
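
A minimal sketch of the kind of rollback this needs, assuming the Juno-era
FilterScheduler layout; release_from_instance is a hypothetical counterpart
to HostState.consume_from_instance:

    # nova/scheduler/filter_scheduler.py -- illustrative sketch only
    selected_hosts = self._schedule(context, request_spec,
                                    filter_properties)
    if len(selected_hosts) < num_instances:
        # Undo the resources already claimed in the in-memory HostState
        # cache so a later request still sees the free capacity.
        for weighed_host in selected_hosts:
            weighed_host.obj.release_from_instance(  # hypothetical helper
                request_spec['instance_properties'])
        raise exception.NoValidHost(reason='')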

** Affects: nova
 Importance: Undecided
 Assignee: Rui Chen (kiwik-chenrui)
 Status: New


** Changed in: nova
 Assignee: (unassigned) => Rui Chen (kiwik-chenrui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408859

Title:
  Scheduler didn't release already-allocated HostState resources after
  a multiple-instance create fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  We boot 3 instances in a single multiple-create request, but the host only
  has enough resources for 1 instance. In select_destinations, nova-scheduler
  consumes the selected host's resources for the first instance.
  After the multiple create fails, we try to boot 1 instance with the same
  flavor. The host has enough resources to boot it, but nova-scheduler raises
  'No Valid Host'. Worse, the host resource tracker only writes the compute
  node to the DB when the host resources have changed, so the ComputeNode's
  update time in the DB stays older than the update time in the scheduler
  cache and the cache is never refreshed. In this case, the host will never
  be selected again.
  We need to release the host resources when a multiple-instance create
  fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408859/+subscriptions



[Yahoo-eng-team] [Bug 1408862] [NEW] type filter of policy list doesn't take effect

2015-01-08 Thread wanghong
Public bug reported:

According to the code and its comments, it seems that we don't
want to support a type filter for policy list; refer to:
https://github.com/openstack/keystone/blob/master/keystone/policy/core.py#L64

However, the controller defines the type filter:
https://github.com/openstack/keystone/blob/master/keystone/policy/controllers.py#L34
and the doc says we support this too:
https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3.rst#list-policies

I think we should clean up the controller and make the doc match the code.
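
For reference, the mismatch looks roughly like this (paraphrased from the
linked files, so treat it as illustrative):

    # keystone/policy/core.py -- the manager/driver ignores any filters:
    def list_policies(self):
        # NOTE: filtering is not supported at this layer
        ...

    # keystone/policy/controllers.py -- yet the controller advertises one:
    @controller.filterprotected('type')
    def list_policies(self, context, filters):
        ...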

** Affects: keystone
 Importance: Undecided
 Assignee: wanghong (w-wanghong)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => wanghong (w-wanghong)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1408862

Title:
  type filter of policy list doesn't take effect

Status in OpenStack Identity (Keystone):
  New

Bug description:
  According to the code and its comments, it seems that we don't
  want to support a type filter for policy list; refer to:
  https://github.com/openstack/keystone/blob/master/keystone/policy/core.py#L64

  However, the controller defines the type filter:
  https://github.com/openstack/keystone/blob/master/keystone/policy/controllers.py#L34
  and the doc says we support this too:
  https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3.rst#list-policies

  I think we should clean up the controller and make the doc match the code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1408862/+subscriptions



[Yahoo-eng-team] [Bug 1408865] [NEW] Ignoring EndpointNotFound: The service catalog is empty error when init_host

2015-01-08 Thread wanghao
Public bug reported:

The scenario:

1. Create a VM using a bootable volume.

2. Delete this VM.

3. Restart the nova-compute service while the VM's task state is 'deleting'.

When nova-compute comes back up, the VM is deleted successfully, but the
bootable volume stays in the 'in-use' state and can't be deleted using
cinder's volume delete.

The error point is that when nova-compute comes up, init_host goes to delete
the VM whose task state is 'deleting', but the context used comes from the
nova.context.get_admin_context() function, which has no auth_token. When
self.volume_api.terminate_connection(context, bdm.volume_id, connector) is
called in the VM deletion process, it throws "Ignoring EndpointNotFound: The
service catalog is empty" and can't detach the bootable volume.
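
A condensed illustration of the failing path (illustrative, assuming the
Juno code layout):

    # The admin context built during init_host carries no auth_token and
    # hence no service catalog.
    from nova import context as nova_context
    ctxt = nova_context.get_admin_context()
    assert ctxt.auth_token is None   # nothing to authenticate with
    # So this call cannot locate a cinder endpoint and fails with
    # "EndpointNotFound: The service catalog is empty."
    self.volume_api.terminate_connection(ctxt, bdm.volume_id, connector)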

** Affects: nova
 Importance: Undecided
 Assignee: wanghao (wanghao749)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => wanghao (wanghao749)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408865

Title:
  Ignoring EndpointNotFound: The service catalog is empty error when
  init_host

Status in OpenStack Compute (Nova):
  New

Bug description:
  the scenario:

  1. create a vm using bootable volume.

  2. delete this vm

  3. restart service nova-compute when vm's task state is deleting.

  When nova-compute is up, vm became deleted successful, but the
  bootable volume is still in-use state and can't delete it using cinder
  delete volume.

  The error point is when nova-compute is up, init_host will go to
  delete the vm whose task state is deleting, but the context using is
  got from nova.context.get_admin_context() function. There is no
  auth_token.  When call self.volume_api.terminate_connection(context,
  bdm.volume_id, connector) in deleting vm process, it will throw
  Ignoring EndpointNotFound: The service catalog is empty error and
  can't detach the bootable volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408865/+subscriptions



[Yahoo-eng-team] [Bug 1408845] [NEW] Disabling user in ldap breaks user-list for project

2015-01-08 Thread Oleksii Aleksieiev
Public bug reported:

Disabling a user in LDAP breaks user-list for a project.

Steps to reproduce:

* Create a testuser user in the LDAP backend for keystone.
* Check that the user exists in the user list.
* Assign some role to this user in any test project.
* Check that this user appears in keystone user-list --tenant-id=testtenantid.
* Disable this user in LDAP or remove it from the group.
* The user disappears from the user list, but the command keystone user-list
--tenant-id=testtenantid returns a "User testuser not found." error in the API
and in the keystone error log.

The workaround is to remove the user's role from the user_project_metadata
table in the keystone database.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1408845

Title:
  Disabling user in ldap breaks user-list for project

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Disabling user in ldap brakes user-list for project.

  Step to reproduce.

  * create a testuser user in ldap backend for keystone.
  * check that user exist in user list.
  * assign some role to this user in any test project.
  * check that this user appear in keystone user-list --tenat_it=testtenantid
  * disable this user in ldap or remove it from the group.
  * the user will disappear from user list but the command keystone user-list 
--tenat_id=testtenantid will return User testuser not found. error in api 
and in keystone error log.

  The workaround is to remove  role for user from user_project_metadata
  table in keystone database.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1408845/+subscriptions



[Yahoo-eng-team] [Bug 1408862] Re: type filter of policy list doesn't take effect

2015-01-08 Thread wanghong
The filtering by 'type' is not being done by the manager/driver. It's
being done at the Controller level, in PolicyV3.wrap_collection(...).

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1408862

Title:
  type filter of policy list doesn't take effect

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  According to the code and its comments, it seems that we don't
  want to support a type filter for policy list; refer to:
  https://github.com/openstack/keystone/blob/master/keystone/policy/core.py#L64

  However, the controller defines the type filter:
  https://github.com/openstack/keystone/blob/master/keystone/policy/controllers.py#L34
  and the doc says we support this too:
  https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3.rst#list-policies

  I think we should clean up the controller and make the doc match the code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1408862/+subscriptions



[Yahoo-eng-team] [Bug 1408873] [NEW] references to name in pluggables misleading

2015-01-08 Thread David Lyle
Public bug reported:

The term 'name' is used incorrectly and is misleading in the pluggable
extensions settings documentation and the enabled files. The value that
needs to be specified is the slug, not the name. This correction will help
developers and deployers use the correct value.

Change references from 'name' to 'slug'.
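
As a concrete illustration (the dashboard name here is hypothetical), an
enabled file must carry the slug:

    # openstack_dashboard/enabled/_50_mydash.py -- illustrative
    # DASHBOARD must match the Dashboard class's slug attribute,
    # not its display name.
    DASHBOARD = 'mydash'                # slug of a hypothetical dashboard
    ADD_INSTALLED_APPS = ['mydash']
    DISABLED = False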

** Affects: horizon
 Importance: Low
 Assignee: David Lyle (david-lyle)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1408873

Title:
  references to name in pluggables misleading

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The term 'name' is used incorrectly and is misleading in the pluggable
  extensions settings documentation and the enabled files. The value that
  needs to be specified is the slug, not the name. This correction will
  help developers and deployers use the correct value.

  Change references from 'name' to 'slug'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1408873/+subscriptions



[Yahoo-eng-team] [Bug 1408908] [NEW] Nova API secgroup-list doesn't support all_tenants

2015-01-08 Thread sireesha chintada
Public bug reported:

When nova secgroup-list --all-tenants is run with admin credentials sourced,
it returns the secgroups of only the admin tenant.
It should actually list the secgroups of all tenants present.
This bug is reproducible in stable/icehouse and stable/juno.

Steps to reproduce this bug:

ubuntu@ubuntu-ThinkCentre-M92p:~/devstack$ source openrc rem REMEMBER

ubuntu@ubuntu-ThinkCentre-M92p:~/devstack$ nova secgroup-list
+--------------------------------------+---------+-------------+
| Id                                   | Name    | Description |
+--------------------------------------+---------+-------------+
| 4a00a598-e125-41c3-80ab-d9a6055d9a21 | default | default     |
+--------------------------------------+---------+-------------+


ubuntu@ubuntu-ThinkCentre-M92p:~/devstack$ source openrc admin admin


ubuntu@ubuntu-ThinkCentre-M92p:~/devstack$ nova secgroup-list
+--------------------------------------+---------+-------------+
| Id                                   | Name    | Description |
+--------------------------------------+---------+-------------+
| 51df59f5-1a05-4e99-9cbd-867114017e65 | default | default     |
+--------------------------------------+---------+-------------+
ubuntu@ubuntu-ThinkCentre-M92p:~/devstack$ nova secgroup-list --all-tenants
+--------------------------------------+---------+-------------+----------------------------------+
| Id                                   | Name    | Description | Tenant_ID                        |
+--------------------------------------+---------+-------------+----------------------------------+
| 51df59f5-1a05-4e99-9cbd-867114017e65 | default | default     | d6c7a334353c4613900bfe822ac93d0e |
+--------------------------------------+---------+-------------+----------------------------------+

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408908

Title:
  Nova API secgroup-list doesn't support all_tenants

Status in OpenStack Compute (Nova):
  New

Bug description:
  When nova secgroup-list --all-tenants is run with admin credentials
  sourced, it returns the secgroups of only the admin tenant.
  It should actually list the secgroups of all tenants present.
  This bug is reproducible in stable/icehouse and stable/juno.

  Steps to reproduce this bug:

  ubuntu@ubuntu-ThinkCentre-M92p:~/devstack$ source openrc rem REMEMBER

  ubuntu@ubuntu-ThinkCentre-M92p:~/devstack$ nova secgroup-list
  +--------------------------------------+---------+-------------+
  | Id                                   | Name    | Description |
  +--------------------------------------+---------+-------------+
  | 4a00a598-e125-41c3-80ab-d9a6055d9a21 | default | default     |
  +--------------------------------------+---------+-------------+


  ubuntu@ubuntu-ThinkCentre-M92p:~/devstack$ source openrc admin admin


  ubuntu@ubuntu-ThinkCentre-M92p:~/devstack$ nova secgroup-list
  +--------------------------------------+---------+-------------+
  | Id                                   | Name    | Description |
  +--------------------------------------+---------+-------------+
  | 51df59f5-1a05-4e99-9cbd-867114017e65 | default | default     |
  +--------------------------------------+---------+-------------+
  ubuntu@ubuntu-ThinkCentre-M92p:~/devstack$ nova secgroup-list --all-tenants
  +--------------------------------------+---------+-------------+----------------------------------+
  | Id                                   | Name    | Description | Tenant_ID                        |
  +--------------------------------------+---------+-------------+----------------------------------+
  | 51df59f5-1a05-4e99-9cbd-867114017e65 | default | default     | d6c7a334353c4613900bfe822ac93d0e |
  +--------------------------------------+---------+-------------+----------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408908/+subscriptions



[Yahoo-eng-team] [Bug 1408909] [NEW] Release_dhcp_port doesn't work when remove network from dhcp agent

2015-01-08 Thread KaiLin
Public bug reported:

In bug https://bugs.launchpad.net/neutron/+bug/1288923
the behavior was changed to reserve the dhcp port for reuse after
remove-network-from-agent.
The code is:

    port['device_id'] = constants.DEVICE_ID_RESERVED_DHCP_PORT
    self.update_port(context, port['id'], dict(port=port))

Then the code goes on to execute release_dhcp_port. Because the port's
device_id in the database has already been modified above, release_dhcp_port
can no longer find and release the dhcp port:

    def delete_ports_by_device_id(self, context, device_id, network_id=None):
        query = (context.session.query(models_v2.Port.id)
                 .enable_eagerloads(False)
                 .filter(models_v2.Port.device_id == device_id))
        if network_id:
            query = query.filter(models_v2.Port.network_id == network_id)
        port_ids = [p[0] for p in query]
        for port_id in port_ids:
            try:
                self.delete_port(context, port_id)

port_ids will be empty, so release_dhcp_port does nothing, which seems
inappropriate.
I think that if we want to reserve the dhcp port, we don't need
release_dhcp_port at all, so we could delete that call.
Or we may have other ways of handling this.
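
One possible shape for a fix, sketched under the assumption that ports
re-labelled as reserved should still be deletable here:

    # illustrative sketch: also match ports whose device_id was rewritten
    # to the reserved marker, so release_dhcp_port can still find them
    query = (context.session.query(models_v2.Port.id)
             .enable_eagerloads(False)
             .filter(models_v2.Port.device_id.in_(
                 [device_id, constants.DEVICE_ID_RESERVED_DHCP_PORT])))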

** Affects: neutron
 Importance: Undecided
 Assignee: KaiLin (linkai3)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => KaiLin (linkai3)

** Summary changed:

- Release_dhcp_port doesn't work when When remove network from dhcp agent
+ Release_dhcp_port doesn't work when remove network from dhcp agent

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1408909

Title:
  Release_dhcp_port doesn't work when remove network from dhcp agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In bug https://bugs.launchpad.net/neutron/+bug/1288923
  the behavior was changed to reserve the dhcp port for reuse after
  remove-network-from-agent.
  The code is:

      port['device_id'] = constants.DEVICE_ID_RESERVED_DHCP_PORT
      self.update_port(context, port['id'], dict(port=port))

  Then the code goes on to execute release_dhcp_port. Because the port's
  device_id in the database has already been modified above,
  release_dhcp_port can no longer find and release the dhcp port:

      def delete_ports_by_device_id(self, context, device_id,
                                    network_id=None):
          query = (context.session.query(models_v2.Port.id)
                   .enable_eagerloads(False)
                   .filter(models_v2.Port.device_id == device_id))
          if network_id:
              query = query.filter(models_v2.Port.network_id == network_id)
          port_ids = [p[0] for p in query]
          for port_id in port_ids:
              try:
                  self.delete_port(context, port_id)

  port_ids will be empty, so release_dhcp_port does nothing, which seems
  inappropriate.
  I think that if we want to reserve the dhcp port, we don't need
  release_dhcp_port at all, so we could delete that call.
  Or we may have other ways of handling this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1408909/+subscriptions



[Yahoo-eng-team] [Bug 1065531] Re: lockutils - remove lock dir creation and cleanup

2015-01-08 Thread Ben Nemec
I believe everyone is on oslo.concurrency now, so this should no longer
be an issue anywhere.

** Changed in: cinder
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1065531

Title:
  lockutils - remove lock dir creation and cleanup

Status in Cinder:
  Fix Released
Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in The Oslo library incubator:
  Fix Released

Bug description:
  See https://review.openstack.org/14139

  This:

      if not local_lock_path:
          cleanup_dir = True
          local_lock_path = tempfile.mkdtemp()

      if not os.path.exists(local_lock_path):
          cleanup_dir = True
          ensure_tree(local_lock_path)
      ...
      finally:
          # NOTE(vish): This removes the tempdir if we needed
          # to create one. This is used to cleanup
          # the locks left behind by unit tests.
          if cleanup_dir:
              shutil.rmtree(local_lock_path)

  Why are we deleting the lock dir here? Does that even work? i.e. what if
  someone concurrently tries to take the lock, re-creates the dir and locks
  a new file?

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1065531/+subscriptions



[Yahoo-eng-team] [Bug 1407105] Re: Password Change Doesn't Affirmatively Invalidate Sessions

2015-01-08 Thread Jeremy Stanley
Given the overwhelming consensus that this isn't exploitable, I've
switched the bug to public and marked the security advisory task "Won't
Fix" so this can just be worked as a normal bug/hardening opportunity.

** Information type changed from Private Security to Public

** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1407105

Title:
  Password Change Doesn't Affirmatively Invalidate Sessions

Status in OpenStack Dashboard (Horizon):
  Incomplete
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  This issue is being treated as a potential security risk under
  embargo. Please do not make any public mention of embargoed (private)
  security vulnerabilities before their coordinated publication by the
  OpenStack Vulnerability Management Team in the form of an official
  OpenStack Security Advisory. This includes discussion of the bug or
  associated fixes in public forums such as mailing lists, code review
  systems and bug trackers. Please also avoid private disclosure to
  other individuals not already approved for access to this information,
  and provide this same reminder to those who are made aware of the
  issue prior to publication. All discussion should remain confined to
  this private bug report, and any proposed fixes should be added to
  the bug as attachments.

  The password change dialog at /horizon/settings/password/ contains the
  following code:

  if user_is_editable:
      try:
          api.keystone.user_update_own_password(request,
                                                data['current_password'],
                                                data['new_password'])
          response = http.HttpResponseRedirect(settings.LOGOUT_URL)
          msg = _("Password changed. Please log in again to continue.")
          utils.add_logout_reason(request, response, msg)
          return response
      except Exception:
          exceptions.handle(request,
                            _('Unable to change password.'))
          return False
  else:
      messages.error(request, _('Changing password is not supported.'))
      return False

  There are at least two security concerns here:
  1) Logout is done by means of an HTTP redirect.  Let's say Eve, as MitM, gets 
ahold of Alice's token somehow.  Alice is worried this may have happened, so 
she changes her password.  If Eve suspects that the request is a 
password-change request (which is the most Eve can do, because we're running 
over HTTPS, right?  Right!?), then it's a simple matter to block the redirect 
from ever reaching the client, or the redirect request from hitting the server. 
 From Alice's PoV, something weird happened, but her new password works, so 
she's not bothered.  Meanwhile, Alice's old login ticket continues to work.
  2) Part of the purpose of changing a password is generally to block those who 
might already have the password from continuing to use it.  A password change 
should trigger (insofar as is possible) a purging of all active logins/tokens 
for that user.  That does not happen here.

  Frankly, I'm not the least bit sure if I've thought of the worst-case
  scenario(s) for point #1.  It just strikes me as very strange not to
  aggressively/proactively kill the ticket/token(s), instead relying on
  the client to do so.  Feel free to apply minds smarter and more
  devious than my own!
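
  A hedged sketch of the hardening direction for point #2;
  revoke_user_tokens is a hypothetical helper, since how to purge depends
  on the token backend:

      # illustrative only -- after a successful password change, purge the
      # user's existing tokens server-side rather than relying on the
      # client honoring the logout redirect
      api.keystone.user_update_own_password(request,
                                            data['current_password'],
                                            data['new_password'])
      api.keystone.revoke_user_tokens(request, request.user.id)  # hypothetical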

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1407105/+subscriptions



[Yahoo-eng-team] [Bug 1408625] [NEW] metadata server errors out with a request missing X-Instance-ID-Signature header

2015-01-08 Thread Tomoe Sugihara
Public bug reported:

When the metadata server (nova-api:8775 by default) gets a request without an
X-Instance-ID-Signature header, the server errors out with the following
stacktrace:


2015-01-08 18:10:51.955 INFO nova.metadata.wsgi.server [-] 127.0.0.1 "GET / HTTP/1.1" status: 200 len: 215 time: 0.0011151
2015-01-08 18:10:55.354 ERROR nova.api.ec2 [-] FaultWrapper: object of type 'NoneType' has no len()
2015-01-08 18:10:55.354 TRACE nova.api.ec2 Traceback (most recent call last):
2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "/opt/stack/nova/nova/api/ec2/__init__.py", line 90, in __call__
2015-01-08 18:10:55.354 TRACE nova.api.ec2     return req.get_response(self.application)
2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "build/bdist.linux-x86_64/egg/webob/request.py", line 1320, in send
2015-01-08 18:10:55.354 TRACE nova.api.ec2     application, catch_exc_info=False)
2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "build/bdist.linux-x86_64/egg/webob/request.py", line 1284, in call_application
2015-01-08 18:10:55.354 TRACE nova.api.ec2     app_iter = application(self.environ, start_response)
2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "build/bdist.linux-x86_64/egg/webob/dec.py", line 130, in __call__
2015-01-08 18:10:55.354 TRACE nova.api.ec2     resp = self.call_func(req, *args, **self.kwargs)
2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "build/bdist.linux-x86_64/egg/webob/dec.py", line 195, in call_func
2015-01-08 18:10:55.354 TRACE nova.api.ec2     return self.func(req, *args, **kwargs)
2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "/opt/stack/nova/nova/api/ec2/__init__.py", line 102, in __call__
2015-01-08 18:10:55.354 TRACE nova.api.ec2     rv = req.get_response(self.application)
2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "build/bdist.linux-x86_64/egg/webob/request.py", line 1320, in send
2015-01-08 18:10:55.354 TRACE nova.api.ec2     application, catch_exc_info=False)
2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "build/bdist.linux-x86_64/egg/webob/request.py", line 1284, in call_application
2015-01-08 18:10:55.354 TRACE nova.api.ec2     app_iter = application(self.environ, start_response)
2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "build/bdist.linux-x86_64/egg/webob/dec.py", line 130, in __call__
2015-01-08 18:10:55.354 TRACE nova.api.ec2     resp = self.call_func(req, *args, **self.kwargs)
2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "build/bdist.linux-x86_64/egg/webob/dec.py", line 195, in call_func
2015-01-08 18:10:55.354 TRACE nova.api.ec2     return self.func(req, *args, **kwargs)
2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "/opt/stack/nova/nova/api/metadata/handler.py", line 110, in __call__
2015-01-08 18:10:55.354 TRACE nova.api.ec2     meta_data = self._handle_instance_id_request(req)
2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "/opt/stack/nova/nova/api/metadata/handler.py", line 187, in _handle_instance_id_request
2015-01-08 18:10:55.354 TRACE nova.api.ec2     if not utils.constant_time_compare(expected_signature, signature):
2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "/opt/stack/nova/nova/utils.py", line 1140, in constant_time_compare
2015-01-08 18:10:55.354 TRACE nova.api.ec2     if len(first) != len(second):
2015-01-08 18:10:55.354 TRACE nova.api.ec2 TypeError: object of type 'NoneType' has no len()
2015-01-08 18:10:55.354 TRACE nova.api.ec2


It'd be safer to validate against non-existence.
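
A minimal sketch of such validation in _handle_instance_id_request
(illustrative placement and status codes):

    # reject the request up front instead of letting
    # constant_time_compare() trip over a missing header
    signature = req.headers.get('X-Instance-ID-Signature')
    if signature is None:
        raise webob.exc.HTTPBadRequest()
    if not utils.constant_time_compare(expected_signature, signature):
        raise webob.exc.HTTPForbidden()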

** Affects: nova
 Importance: Undecided
 Assignee: Tomoe Sugihara (tomoe)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408625

Title:
  metadata server errors out with a request missing X-Instance-ID-
  Signature header

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When the metadata server (nova-api:8775 by default) gets a request without
  an X-Instance-ID-Signature header, the server errors out with the
  following stacktrace:


  2015-01-08 18:10:51.955 INFO nova.metadata.wsgi.server [-] 127.0.0.1 "GET / HTTP/1.1" status: 200 len: 215 time: 0.0011151
  2015-01-08 18:10:55.354 ERROR nova.api.ec2 [-] FaultWrapper: object of type 'NoneType' has no len()
  2015-01-08 18:10:55.354 TRACE nova.api.ec2 Traceback (most recent call last):
  2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "/opt/stack/nova/nova/api/ec2/__init__.py", line 90, in __call__
  2015-01-08 18:10:55.354 TRACE nova.api.ec2     return req.get_response(self.application)
  2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "build/bdist.linux-x86_64/egg/webob/request.py", line 1320, in send
  2015-01-08 18:10:55.354 TRACE nova.api.ec2     application, catch_exc_info=False)
  2015-01-08 18:10:55.354 TRACE nova.api.ec2   File "build/bdist.linux-x86_64/egg/webob/request.py", line 1284, in call_application
  2015-01-08 18:10:55.354 TRACE nova.api.ec2     app_iter = application(self.environ,
[Yahoo-eng-team] [Bug 1408636] [NEW] Incompatibility between nova 2014.2 and 2014.2.1-1

2015-01-08 Thread Michal
Public bug reported:

The problem I experienced and a solution are described at the following
link:

https://ask.openstack.org/en/question/57296/juno-centos-7-buildabortexception-build-of-instance-aborted-failed-to-allocate-the-networks-not-rescheduling/

It was also reported here:

https://ask.openstack.org/en/question/56759/cant-launch-instance-in-secondary-compute-node/

Basically, in my case I had to forcibly upgrade 2014.2.1-1 to 2014.2-2,
but it seems that the repos for Ubuntu and CentOS are not updated properly.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408636

Title:
  Incompatibility between nova 2014.2 and 2014.2.1-1

Status in OpenStack Compute (Nova):
  New

Bug description:
  The problem I experienced and a solution are described at the following
  link:

  https://ask.openstack.org/en/question/57296/juno-centos-7-buildabortexception-build-of-instance-aborted-failed-to-allocate-the-networks-not-rescheduling/

  It was also reported here:

  https://ask.openstack.org/en/question/56759/cant-launch-instance-in-secondary-compute-node/

  Basically, in my case I had to forcibly upgrade 2014.2.1-1 to 2014.2-2,
  but it seems that the repos for Ubuntu and CentOS are not updated
  properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408636/+subscriptions



[Yahoo-eng-team] [Bug 1408640] [NEW] logout dropdown was blocked by message tip

2015-01-08 Thread LIU Yulong
Public bug reported:

The logout dropdown is blocked by the message tip;
if you close one message tip, the next one comes to the top again :(

** Affects: horizon
 Importance: Undecided
 Assignee: LIU Yulong (dragon889)
 Status: In Progress

** Attachment added: "message block the logout dropdown.png"
   https://bugs.launchpad.net/bugs/1408640/+attachment/4293995/+files/message%20block%20the%20logout%20dropdown.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1408640

Title:
  logout dropdown was blocked by message tip

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The logout dropdown is blocked by the message tip;
  if you close one message tip, the next one comes to the top again :(

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1408640/+subscriptions



[Yahoo-eng-team] [Bug 1408591] [NEW] AttributeError: 'Instance' object has no attribute 'get_flavor' when call compute_api.update

2015-01-08 Thread Jerry Cai
Public bug reported:

In nova/notifications.py(370), info_from_instance():
The AttributeError: 'Instance' object has no attribute 'get_flavor' is thrown
on:

    instance_type = instance.get_flavor()

The stacktrace is:

-> self.compute_api.update(context, local_instance, **base_options)
  /usr/lib/python2.7/site-packages/nova/compute/api.py(235)wrapped()
-> return func(self, context, target, *args, **kwargs)
  /usr/lib/python2.7/site-packages/nova/compute/api.py(1501)update()
-> refs = self._update(context, instance, **kwargs)
  /usr/lib/python2.7/site-packages/nova/compute/api.py(1510)_update()
-> instance_ref, service=api)
  /usr/lib/python2.7/site-packages/nova/notifications.py(146)send_update()
-> old_display_name=old_display_name)
  /usr/lib/python2.7/site-packages/nova/notifications.py(226)_send_instance_update_notification()
-> payload = info_from_instance(context, instance, None, None)
  /usr/lib/python2.7/site-packages/nova/notifications.py(370)info_from_instance()
-> instance_type = instance.get_flavor()

I tried passing in both the db instance and the nova object instance; I
believe this is a defect. Please look into it.
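
One defensive sketch, assuming the caller may hand info_from_instance() a
raw DB record instead of a nova object:

    # illustrative only: coerce a DB record into objects.Instance before
    # the notifications code calls instance.get_flavor()
    from nova import objects
    if not isinstance(instance, objects.Instance):
        instance = objects.Instance._from_db_object(
            context, objects.Instance(), instance)
    instance_type = instance.get_flavor()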

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408591

Title:
  AttributeError: 'Instance' object has no attribute 'get_flavor' when
  call compute_api.update

Status in OpenStack Compute (Nova):
  New

Bug description:
  In nova/notifications.py(370), info_from_instance():
  The AttributeError: 'Instance' object has no attribute 'get_flavor' is
  thrown on:

      instance_type = instance.get_flavor()

  The stacktrace is:

  -> self.compute_api.update(context, local_instance, **base_options)
    /usr/lib/python2.7/site-packages/nova/compute/api.py(235)wrapped()
  -> return func(self, context, target, *args, **kwargs)
    /usr/lib/python2.7/site-packages/nova/compute/api.py(1501)update()
  -> refs = self._update(context, instance, **kwargs)
    /usr/lib/python2.7/site-packages/nova/compute/api.py(1510)_update()
  -> instance_ref, service=api)
    /usr/lib/python2.7/site-packages/nova/notifications.py(146)send_update()
  -> old_display_name=old_display_name)
    /usr/lib/python2.7/site-packages/nova/notifications.py(226)_send_instance_update_notification()
  -> payload = info_from_instance(context, instance, None, None)
    /usr/lib/python2.7/site-packages/nova/notifications.py(370)info_from_instance()
  -> instance_type = instance.get_flavor()

  I tried passing in both the db instance and the nova object instance; I
  believe this is a defect. Please look into it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408591/+subscriptions



[Yahoo-eng-team] [Bug 1408603] [NEW] OVS Agent creates a tunnel when local_ip is wrong

2015-01-08 Thread Itzik Brown
Public bug reported:


When specifying a wrong local_ip (one that doesn't belong to the host) with
tunnel type 'vxlan', a tunnel is created whose local_ip is the wrong one
while the remote_ip is the right one.
There should be a sanity check that the IP address in local_ip belongs to
the host.
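
A sanity-check sketch along those lines (illustrative; it assumes
ip_lib.IPWrapper().get_device_by_ip() behaves as in the neutron tree of
that era):

    # refuse to enable tunneling if local_ip is not configured on any
    # device on this host
    from neutron.agent.linux import ip_lib

    def _check_local_ip(local_ip):
        if ip_lib.IPWrapper().get_device_by_ip(local_ip) is None:
            raise SystemExit('Tunneling cannot be enabled: local_ip %s is '
                             'not configured on this host.' % local_ip)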

Version

RHEL7.0
openstack-neutron-2014.2.1-5.el7ost

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1408603

Title:
  OVS Agent creates a tunnel when local_ip is wrong

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  When specifying a wrong local_ip (one that doesn't belong to the host)
  with tunnel type 'vxlan', a tunnel is created whose local_ip is the wrong
  one while the remote_ip is the right one.
  There should be a sanity check that the IP address in local_ip belongs to
  the host.

  Version
  
  RHEL7.0
  openstack-neutron-2014.2.1-5.el7ost

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1408603/+subscriptions



[Yahoo-eng-team] [Bug 1408612] [NEW] HTTP Keep-alive connections prevent keystone from terminating

2015-01-08 Thread Mark Goddard
Public bug reported:

Seen on RDO Juno, running on CentOS 7.

Steps to reproduce:

- Set admin_workers=1 and public_workers=1 in /etc/keystone/keystone.conf
- Start the keystone service: `systemctl start openstack-keystone`
- Start a 'persistent' TCP connection to keystone: `telnet localhost 5000 `
- Stop the service: `systemctl stop openstack-keystone`

The final systemctl invocation will hang, as the process fails to
terminate. Eventually it will time out and forcefully kill the process.

Output of `systemctl status openstack-keystone`:

Jan 08 05:07:38 mgoddard systemd[1]: openstack-keystone.service stopping timed 
out. Killing.
Jan 08 05:07:38 mgoddard systemd[1]: openstack-keystone.service: main process 
exited, code=killed, status=9/KILL
Jan 08 05:07:38 mgoddard systemd[1]: Stopped OpenStack Identity Service.
Jan 08 05:07:38 mgoddard systemd[1]: Unit openstack-keystone.service entered 
failed state.

The use of telnet here is just to demonstrate the problem. The same
effect can be seen when OpenStack services maintain persistent
connections to keystone.

With multiple worker processes, the issue is not observed. It is
believed that as systemd is able to kill the parent process, the child
process holding the persistent connection is killed by systemd, so the
issue is not observed (although this is speculation).

When this issue was first observed, multiple workers were used and
systemd was not in use. Rather, we used init scripts in /etc/init.d/. In
this case the result was worse, as the `service openstack-keystone stop`
command would exit successfully, but fail to terminate any child
processes with persistent HTTP connections open. Subsequent attempts to
start the keystone service would fail due to the lingering stale
process.


During the investigation of the issue, some root cause analysis was performed 
which will be presented below.

- When a keystone process receives SIGTERM, it ends up waiting for all 
greenthreads in the greenpool to finish at 
https://github.com/eventlet/eventlet/blob/8d2474197de4827a7bca9c33e71a82573b6fc721/eventlet/wsgi.py#L267.
- Persistent connections, when between HTTP requests, end up waiting at 
https://github.com/eventlet/eventlet/blob/8d2474197de4827a7bca9c33e71a82573b6fc721/eventlet/wsgi.py#L267
 for the next request. The greenthread will not terminate until the connection 
is closed.

The process will therefore not terminate until all connections have
closed. It seems sensible to me to finish servicing individual requests
for a graceful shutdown, but there needs to be a mechanism to close
persistent connections between requests.

This issue could (should?) be solved in eventlet.wsgi by a mechanism to
trigger disconnection of persistent connections between requests when
the server is stopped.
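
For illustration, eventlet.wsgi already exposes a keepalive flag; a crude
but runnable demonstration of the trade-off (not the proposed fix itself):

    # with keepalive=False the shutdown hang described above cannot occur,
    # at the cost of losing persistent connections entirely; a real fix
    # would instead flip this behavior only once shutdown begins
    import eventlet
    import eventlet.wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok\n']

    sock = eventlet.listen(('127.0.0.1', 5000))
    eventlet.wsgi.server(sock, app, keepalive=False)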

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1408612

Title:
  HTTP Keep-alive connections prevent keystone from terminating

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Seen on RDO Juno, running on CentOS 7.

  Steps to reproduce:

  - Set admin_workers=1 and public_workers=1 in /etc/keystone/keystone.conf
  - Start the keystone service: `systemctl start openstack-keystone`
  - Start a 'persistent' TCP connection to keystone: `telnet localhost 5000 `
  - Stop the service: `systemctl stop openstack-keystone`

  The final systemctl invocation will hang, as the process fails to
  terminate. Eventually it will time out and forcefully kill the
  process.

  Output of `systemctl status openstack-keystone`:

  Jan 08 05:07:38 mgoddard systemd[1]: openstack-keystone.service stopping 
timed out. Killing.
  Jan 08 05:07:38 mgoddard systemd[1]: openstack-keystone.service: main process 
exited, code=killed, status=9/KILL
  Jan 08 05:07:38 mgoddard systemd[1]: Stopped OpenStack Identity Service.
  Jan 08 05:07:38 mgoddard systemd[1]: Unit openstack-keystone.service entered 
failed state.

  The use of telnet here is just to demonstrate the problem. The same
  effect can be seen when OpenStack services maintain persistent
  connections to keystone.

  With multiple worker processes, the issue is not observed. It is
  believed that as systemd is able to kill the parent process, the child
  process holding the persistent connection is killed by systemd, so the
  issue is not observed (although this is speculation).

  When this issue was first observed, multiple workers were used and
  systemd was not in use. Rather, we used init scripts in /etc/init.d/.
  In this case the result was worse, as the `service openstack-keystone
  stop` command would exit successfully, but fail to terminate any child
  processes with persistent HTTP connections open. Subsequent attempts
  to start the keystone service would fail due to the lingering stale
  process.

  
  During the investigation of the issue, 

[Yahoo-eng-team] [Bug 1408613] [NEW] VMware: snapshotting a VM with ephemeral disk attached creates wrong image

2015-01-08 Thread Gary Kotton
Public bug reported:

Booting an image that was snapshotted from a VM with an ephemeral disk
fails. This is because the wrong root disk is uploaded!

** Affects: nova
 Importance: Critical
 Assignee: Gary Kotton (garyk)
 Status: In Progress

** Changed in: nova
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408613

Title:
  VMware: snapshotting a VM with ephemeral disk attached creates wrong
  image

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Booting an image that was snapshotted from a VM with an ephemeral disk
  fails. This is because the wrong root disk is uploaded!

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408613/+subscriptions



[Yahoo-eng-team] [Bug 1370037] Re: bulk termination of instances

2015-01-08 Thread Matthias Runge
The issue is here: bulk termination stops when an instance is stopped,
e.g. via the command line manually, after displaying the list of instances.
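
A continue-on-error sketch of what the batch handler could do instead
(hypothetical shape; the real logic lives in Horizon's table actions):

    # illustrative only: keep going past individual failures and report
    # them at the end, rather than aborting the whole batch
    errors = []
    for instance_id in selected_ids:
        try:
            api.nova.server_delete(request, instance_id)
        except Exception as exc:
            errors.append((instance_id, exc))
    if errors:
        messages.error(request,
                       'Failed to terminate %d instance(s).' % len(errors))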

** Changed in: horizon
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370037

Title:
  bulk termination of instances

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While terminating a bulk of instances from Horizon, the first
  instance termination failed; the other instances all stayed in
  available status.

  Steps to Reproduce:
  1. Delete a bulk of instances.
  2. The first deletion process should fail and the instance status should
  turn to 'Error'.

  Actual results:
  The other instances have not been terminated.

  Expected results:
  The termination should continue with the other instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1370037/+subscriptions



[Yahoo-eng-team] [Bug 1400966] Re: [OSSA-2014-041] Glance allows users to download and delete any file in glance-api server (CVE-2014-9493)

2015-01-08 Thread Thierry Carrez
Let's track the filesystem: case on bug 1408663, for clarity.

** Changed in: ossa
   Status: In Progress => Fix Released

** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1400966

Title:
  [OSSA-2014-041] Glance allows users to download and delete any file in
  glance-api server (CVE-2014-9493)

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in Glance juno series:
  Fix Committed
Status in Ansible playbooks for deploying OpenStack:
  Fix Committed
Status in openstack-ansible icehouse series:
  In Progress
Status in openstack-ansible juno series:
  In Progress
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  By updating an image's location via the image update API, users can
  download any file for which glance-api has read permission.
  And any file for which glance-api has write permission will be deleted
  when users delete the image.


  For example:
  When users specify '/etc/passwd' as the locations value of an image, the
  user can get the file by downloading the image.

  When the locations of an image is set to 'file:///path/to/glance-api.conf',
  the conf file will be deleted when users delete the image.

  How to recreate the bug:
  download files:
   - set show_multiple_locations True in glance-api.conf
   - create a new image
   - set the image's locations property to a path you want to read, such as
  file:///etc/passwd
   - download the image

  delete files:
   - set show_multiple_locations True in glance-api.conf
   - create a new image
   - set the image's locations property to a path you want to delete, such
  as file:///path/to/glance-api.conf
   - delete the image

  I found this bug in 2014.2 (742c898956d655affa7351505c8a3a5c72881eae).

  What a big A RE RE!!

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1400966/+subscriptions



[Yahoo-eng-team] [Bug 1408663] [NEW] Glance still allows users to download and delete any file in glance-api server

2015-01-08 Thread Thierry Carrez
*** This bug is a security vulnerability ***

Public security bug reported:

Jin Liu reported that OSSA-2014-041 (CVE-2014-9493) only fixed the
vulnerability for swift: and file: URIs, but overlooked filesystem: URIs.
Please see bug 1400966 for historical reference.

** Affects: glance
 Importance: Critical
 Status: In Progress

** Affects: glance/icehouse
 Importance: Critical
 Status: Confirmed

** Affects: glance/juno
 Importance: Critical
 Status: Confirmed

** Affects: ossa
 Importance: Critical
 Status: Confirmed

** Also affects: glance
   Importance: Undecided
   Status: New

** Information type changed from Public to Public Security

** Also affects: glance/icehouse
   Importance: Undecided
   Status: New

** Also affects: glance/juno
   Importance: Undecided
   Status: New

** Changed in: ossa
   Importance: Undecided => Critical

** Changed in: ossa
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1408663

Title:
  Glance still allows users to download and delete any file in glance-
  api server

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Glance icehouse series:
  Confirmed
Status in Glance juno series:
  Confirmed
Status in OpenStack Security Advisories:
  Confirmed

Bug description:
  Jin Liu reported that OSSA-2014-041 (CVE-2014-9493) only fixed the
  vulnerability for swift: and file: URIs, but overlooked filesystem:
  URIs.

  Please see bug 1400966 for historical reference.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1408663/+subscriptions



[Yahoo-eng-team] [Bug 1408656] [NEW] No proper validation for optional arguments for nova list command

2015-01-08 Thread Sudheer Kalla
Public bug reported:


ubuntu@ubuntu-ThinkCentre-M93p:~$ nova list --te
+--------------------------------------+------+--------+------------+-------------+----------+
| ID                                   | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+----------+
| f51fccf3-cbaa-4ba0-a9ab-d0bbe20d2245 | test | ERROR  | -          | NOSTATE     |          |
+--------------------------------------+------+--------+------------+-------------+----------+


The above command should throw a validation error, but it gives normal
output instead. Proper validation needs to be provided for optional
arguments.
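
The likely mechanism (an assumption, not confirmed against the novaclient
code) is argparse's prefix matching, which silently expands an abbreviation
like --te to an existing option:

    # runnable illustration of argparse abbreviation matching
    import argparse

    parser = argparse.ArgumentParser()
    # nova list has a --tenant option that takes an optional value
    parser.add_argument('--tenant', nargs='?')
    print(parser.parse_args(['--te']))   # accepted as --tenant, no error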

** Affects: nova
 Importance: Undecided
 Assignee: Sudheer Kalla (sudheer-kalla)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Sudheer Kalla (sudheer-kalla)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408656

Title:
  No proper validation for optional arguments for nova list command

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  ubuntu@ubuntu-ThinkCentre-M93p:~$ nova list --te
  +--------------------------------------+------+--------+------------+-------------+----------+
  | ID                                   | Name | Status | Task State | Power State | Networks |
  +--------------------------------------+------+--------+------------+-------------+----------+
  | f51fccf3-cbaa-4ba0-a9ab-d0bbe20d2245 | test | ERROR  | -          | NOSTATE     |          |
  +--------------------------------------+------+--------+------------+-------------+----------+


  The above command should throw a validation error, but it gives normal
  output instead. Proper validation needs to be provided for optional
  arguments.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408656/+subscriptions



[Yahoo-eng-team] [Bug 1408658] [NEW] migration 61 downgrade fails using mysql

2015-01-08 Thread Brant Knudson
Public bug reported:


When trying to downgrade from version 61, it fails with an AttributeError.

$ keystone-manage db_sync 60
2015-01-08 08:29:56.494 CRITICAL keystone [-] AttributeError: 'MetaData' object has no attribute 'c'

2015-01-08 08:29:56.494 TRACE keystone Traceback (most recent call last):
2015-01-08 08:29:56.494 TRACE keystone   File "/usr/local/bin/keystone-manage", line 6, in <module>
2015-01-08 08:29:56.494 TRACE keystone     exec(compile(open(__file__).read(), __file__, 'exec'))
2015-01-08 08:29:56.494 TRACE keystone   File "/opt/stack/keystone/bin/keystone-manage", line 44, in <module>
2015-01-08 08:29:56.494 TRACE keystone     cli.main(argv=sys.argv, config_files=config_files)
2015-01-08 08:29:56.494 TRACE keystone   File "/opt/stack/keystone/keystone/cli.py", line 311, in main
2015-01-08 08:29:56.494 TRACE keystone     CONF.command.cmd_class.main()
2015-01-08 08:29:56.494 TRACE keystone   File "/opt/stack/keystone/keystone/cli.py", line 74, in main
2015-01-08 08:29:56.494 TRACE keystone     migration_helpers.sync_database_to_version(extension, version)
2015-01-08 08:29:56.494 TRACE keystone   File "/opt/stack/keystone/keystone/common/sql/migration_helpers.py", line 204, in sync_database_to_version
2015-01-08 08:29:56.494 TRACE keystone     _sync_common_repo(version)
2015-01-08 08:29:56.494 TRACE keystone   File "/opt/stack/keystone/keystone/common/sql/migration_helpers.py", line 160, in _sync_common_repo
2015-01-08 08:29:56.494 TRACE keystone     init_version=init_version)
2015-01-08 08:29:56.494 TRACE keystone   File "/opt/stack/oslo.db/oslo_db/sqlalchemy/migration.py", line 82, in db_sync
2015-01-08 08:29:56.494 TRACE keystone     version)
2015-01-08 08:29:56.494 TRACE keystone   File "/usr/local/lib/python2.7/dist-packages/migrate/versioning/api.py", line 202, in downgrade
2015-01-08 08:29:56.494 TRACE keystone     return _migrate(url, repository, version, upgrade=False, err=err, **opts)
2015-01-08 08:29:56.494 TRACE keystone   File "<string>", line 2, in _migrate
2015-01-08 08:29:56.494 TRACE keystone   File "/usr/local/lib/python2.7/dist-packages/migrate/versioning/util/__init__.py", line 160, in with_engine
2015-01-08 08:29:56.494 TRACE keystone     return f(*a, **kw)
2015-01-08 08:29:56.494 TRACE keystone   File "/usr/local/lib/python2.7/dist-packages/migrate/versioning/api.py", line 366, in _migrate
2015-01-08 08:29:56.494 TRACE keystone     schema.runchange(ver, change, changeset.step)
2015-01-08 08:29:56.494 TRACE keystone   File "/usr/local/lib/python2.7/dist-packages/migrate/versioning/schema.py", line 93, in runchange
2015-01-08 08:29:56.494 TRACE keystone     change.run(self.engine, step)
2015-01-08 08:29:56.494 TRACE keystone   File "/usr/local/lib/python2.7/dist-packages/migrate/versioning/script/py.py", line 148, in run
2015-01-08 08:29:56.494 TRACE keystone     script_func(engine)
2015-01-08 08:29:56.494 TRACE keystone   File "/opt/stack/keystone/keystone/common/sql/migrate_repo/versions/061_add_parent_project.py", line 50, in downgrade
2015-01-08 08:29:56.494 TRACE keystone     migration_helpers.remove_constraints(list_constraints(meta))
2015-01-08 08:29:56.494 TRACE keystone   File "/opt/stack/keystone/keystone/common/sql/migrate_repo/versions/061_add_parent_project.py", line 24, in list_constraints
2015-01-08 08:29:56.494 TRACE keystone     'ref_column': project_table.c.id}]
2015-01-08 08:29:56.494 TRACE keystone AttributeError: 'MetaData' object has no attribute 'c'
2015-01-08 08:29:56.494 TRACE keystone
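
The trace suggests list_constraints() is handed the MetaData object but
treats it as the project table; a fix sketch (illustrative, column name
assumed):

    # resolve the actual table from the MetaData instead of using the
    # MetaData object itself
    import sqlalchemy

    def list_constraints(migrate_engine):
        meta = sqlalchemy.MetaData()
        meta.bind = migrate_engine
        project_table = sqlalchemy.Table('project', meta, autoload=True)
        return [{'table': project_table,
                 'fk_column': 'parent_id',          # assumed column name
                 'ref_column': project_table.c.id}]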

** Affects: keystone
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Brant Knudson (blk-u)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1408658

Title:
  migration 61 downgrade fails using mysql

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  
  When trying to downgrade from version 61, it fails with an AttributeError.

  $ keystone-manage db_sync 60
  2015-01-08 08:29:56.494 CRITICAL keystone [-] AttributeError: 'MetaData' object has no attribute 'c'

  2015-01-08 08:29:56.494 TRACE keystone Traceback (most recent call last):
  2015-01-08 08:29:56.494 TRACE keystone   File "/usr/local/bin/keystone-manage", line 6, in <module>
  2015-01-08 08:29:56.494 TRACE keystone     exec(compile(open(__file__).read(), __file__, 'exec'))
  2015-01-08 08:29:56.494 TRACE keystone   File "/opt/stack/keystone/bin/keystone-manage", line 44, in <module>
  2015-01-08 08:29:56.494 TRACE keystone     cli.main(argv=sys.argv, config_files=config_files)
  2015-01-08 08:29:56.494 TRACE keystone   File "/opt/stack/keystone/keystone/cli.py", line 311, in main
  2015-01-08 08:29:56.494 TRACE keystone     CONF.command.cmd_class.main()
  2015-01-08 08:29:56.494 TRACE keystone   File "/opt/stack/keystone/keystone/cli.py", line 74, in main
  2015-01-08 08:29:56.494

[Yahoo-eng-team] [Bug 1408708] [NEW] The API returns a serial console connection without an activated nova-serialproxy

2015-01-08 Thread Markus Zoeller
Public bug reported:

Problem description
===
The Nova REST API returns with server action ``os-getSerialConsole``
a connection info (a websocket URL) although the nova-serialproxy service
is *not* activated. 

Steps to reproduce
==
* Configure in ``nova.conf``
[serial_console]
enabled=true
* restart nova compute service
* boot an instance
* query serial console connection (e.g. with CLI 
  ``nova get-serial-console server``)

Expected behavior
=
Get an exception with a reason that the ``nova-serialproxy`` is not
activated.

Observed behavior
=
Get a valid looking URL which doesn't lead to an actual connection
because of the inactive nova-serialproxy.

Additional data 
===
* Nova code from master branch until commit 
31bfc6415484054457c84924ac2d824e8ce2db93 (Mon Jan 5 11:49:56 2015 +)
* A serial console client: https://github.com/larsks/novaconsole
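
A minimal probe of the kind the API could perform before handing out a URL
(illustrative; the right layer for such a check is debatable):

    # fail fast if nothing is listening where the returned websocket URL
    # points
    import socket

    def serialproxy_is_listening(host, port, timeout=1.0):
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return True
        except OSError:
            return False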

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: proxy serial-console

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408708

Title:
  The API returns a serial console connection without an activated nova-
  serialproxy

Status in OpenStack Compute (Nova):
  New

Bug description:
  Problem description
  ===
  The Nova REST API returns with server action ``os-getSerialConsole``
  a connection info (a websocket URL) although the nova-serialproxy service
  is *not* activated. 

  Steps to reproduce
  ==
  * Configure in ``nova.conf``
  [serial_console]
  enabled=true
  * restart nova compute service
  * boot an instance
  * query serial console connection (e.g. with CLI 
``nova get-serial-console server``)

  Expected behavior
  =
  Get an exception with a reason that the ``nova-serialproxy`` is not
  activated.

  Observed behavior
  =
  Get a valid looking URL which doesn't lead to an actual connection
  because of the inactive nova-serialproxy.

  Additional data 
  ===
  * Nova code from master branch until commit 
31bfc6415484054457c84924ac2d824e8ce2db93 (Mon Jan 5 11:49:56 2015 +)
  * A serial console client: https://github.com/larsks/novaconsole

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408708/+subscriptions



[Yahoo-eng-team] [Bug 1407736] Re: python unit test jobs failing due to subunit log being too big

2015-01-08 Thread Matt Riedemann
bknudson pointed out the real issue: sqlalchemy-migrate is always logging
deprecation warnings; that's why moving the deprecation-warnings fixture in
nova to after the db fixture fixed the problem for nova:

https://github.com/stackforge/sqlalchemy-migrate/blob/master/migrate/changeset/__init__.py#L13
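
For context, the linked line amounts to the library changing the
process-wide warnings policy at import time; a paraphrased sketch:

    # paraphrase of migrate/changeset/__init__.py (treat as illustrative):
    import warnings
    warnings.simplefilter('always', DeprecationWarning)
    # => any test run that imports sqlalchemy-migrate starts printing
    # every DeprecationWarning (such as pip's "`require` parameter is
    # deprecated"), bloating the subunit log.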

** Also affects: sqlalchemy-migrate
   Importance: Undecided
   Status: New

** Changed in: sqlalchemy-migrate
   Status: New => Confirmed

** Changed in: sqlalchemy-migrate
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407736

Title:
  python unit test jobs failing due to subunit log being too big

Status in OpenStack Compute (Nova):
  In Progress
Status in Database schema migration for SQLAlchemy:
  Confirmed

Bug description:
  http://logs.openstack.org/60/144760/1/check/gate-nova-python26/6eb86b3/console.html#_2015-01-05_10_20_01_178

  2015-01-05 10:20:01.178 | + [[ 72860 -gt 5 ]]
  2015-01-05 10:20:01.178 | + echo
  2015-01-05 10:20:01.178 |
  2015-01-05 10:20:01.178 | + echo 'sub_unit.log was > 50 MB of uncompressed data!!!'
  2015-01-05 10:20:01.178 | sub_unit.log was > 50 MB of uncompressed data!!!
  2015-01-05 10:20:01.179 | + echo 'Something is causing tests for this project to log significant amounts'
  2015-01-05 10:20:01.179 | Something is causing tests for this project to log significant amounts
  2015-01-05 10:20:01.179 | + echo 'of data. This may be writers to python logging, stdout, or stderr.'
  2015-01-05 10:20:01.179 | of data. This may be writers to python logging, stdout, or stderr.
  2015-01-05 10:20:01.179 | + echo 'Failing this test as a result'
  2015-01-05 10:20:01.179 | Failing this test as a result
  2015-01-05 10:20:01.179 | + echo

  Looks like the subunit log is around 73 MB; this could be due to the
  new pip, because I'm seeing a ton of these:

  DeprecationWarning: `require` parameter is deprecated. Use
  EntryPoint._load instead.

  The latest pip was released on 1/3/15:

  https://pypi.python.org/pypi/pip/6.0.6

  That's also when those warnings showed up:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGVwcmVjYXRpb25XYXJuaW5nOiBgcmVxdWlyZWAgcGFyYW1ldGVyIGlzIGRlcHJlY2F0ZWQuIFVzZSBFbnRyeVBvaW50Ll9sb2FkIGluc3RlYWQuXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIgYW5kIHByb2plY3Q6XCJvcGVuc3RhY2svbm92YVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDIwNDc2OTk3NTI3fQ==

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407736/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp