[Yahoo-eng-team] [Bug 1542352] Re: Add popular IP protocols for security group

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/288291
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=bb4b7aa83cada24cb6cdf5bba805e9385a4ea306
Submitter: Jenkins
Branch: master

commit bb4b7aa83cada24cb6cdf5bba805e9385a4ea306
Author: venkatamahesh 
Date:   Fri Mar 4 13:26:44 2016 +0530

[cli-ref] Update python-neutronclient to 4.1.0

Closes-Bug: #1542352
Closes-Bug: #1537179

Change-Id: I6d74cea7616047c516e5c0bb598e03987f4a5ceb


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542352

Title:
  Add popular IP protocols for security group

Status in neutron:
  Confirmed
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/252155
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 592b548bb6720760efae4b10bec59e78a753f4d7
  Author: Li Ma 
  Date:   Wed Dec 2 10:30:22 2015 +0800

  Add popular IP protocols for security group
  
Adding the additional protocols listed below to
security groups brings convenience to operators
configuring these protocols. In addition, it makes
the security group rules more readable.
  
  The added protocols are: ah, dccp, egp, esp, gre,
  ipv6-encap, ipv6-frag, ipv6-nonxt, ipv6-opts,
  ipv6-route, ospf, pgm, rsvp, sctp, udplite, vrrp.
  
  A related patch is submitted to neutron-lib project:
  https://review.openstack.org/259037
  
  DocImpact: You can specify protocol names rather than
protocol numbers in API and CLI commands. I'll update
  the documentation when it is merged.
  
  APIImpact
  
  Change-Id: Iaef9b650449b4d9d362a59305c45e0aa3831507c
  Closes-Bug: #1475717
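
  For illustration, a minimal hedged sketch of what the change enables via
  python-neutronclient (the endpoint, credentials, and security group UUID
  below are placeholders, not values from this bug):

    from neutronclient.v2_0 import client

    # Placeholder credentials/endpoint -- adjust for a real deployment.
    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    # A protocol name such as 'dccp' can now be used where previously only
    # the IP protocol number (33 for DCCP) was accepted.
    neutron.create_security_group_rule({
        'security_group_rule': {
            'security_group_id': 'SECURITY-GROUP-UUID',
            'direction': 'ingress',
            'protocol': 'dccp',
        }
    })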

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542352/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553099] [NEW] forcehost feature not working

2016-03-04 Thread Paul Carlton
Public bug reported:

The option of adding a specific nova compute host to the availability-
zone selection as per http://docs.openstack.org/user-guide-
admin/cli_nova_specify_host.html is not working, i.e. boot  ...
--availability-zone nova:server2 should schedule the instance to server2
regardless of scheduler filters or server2's capacity to accommodate the
instance.

This feature is useful for testing newly provisioned nodes or
potentially faulty nodes.
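
A hedged sketch of the failing request via python-novaclient (credentials and
IDs are placeholders); the same force-host form is what the CLI passes with
boot --availability-zone nova:server2:

    from novaclient import client

    # Placeholder credentials/endpoint.
    nova = client.Client('2', 'demo', 'secret', 'demo',
                         'http://controller:5000/v2.0')

    # 'nova:server2' should force the instance onto host server2,
    # bypassing the scheduler filters, but currently does not.
    nova.servers.create(name='test-vm',
                        image='IMAGE-UUID',
                        flavor='FLAVOR-ID',
                        availability_zone='nova:server2')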

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1553099

Title:
  forcehost feature not working

Status in OpenStack Compute (nova):
  New

Bug description:
  The option of adding a specific nova compute host to the availability-
  zone selection as per http://docs.openstack.org/user-guide-
  admin/cli_nova_specify_host.html is not working, i.e. boot  ...
  --availability-zone nova:server2 should schedule the instance to
  server2  regardless of scheduler filters or server2's capacity to
  accommodate the instance.

  This feature is useful for testing newly provisioned nodes or
  potentially faulty nodes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1553099/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552703] Re: Missing neutron-vyatta-agent console script

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/287798
Committed: 
https://git.openstack.org/cgit/openstack/neutron-vpnaas/commit/?id=6088eee02e9207c4c23373bbc1f385dd6cceb36a
Submitter: Jenkins
Branch: master

commit 6088eee02e9207c4c23373bbc1f385dd6cceb36a
Author: Ihar Hrachyshka 
Date:   Thu Mar 3 14:35:43 2016 +0100

vyatta: added missing agent console script

Without this change, the neutron-vyatta-agent executable is not generated.

Change-Id: I994d699a6d957e7f2498f4b515b9256644d8d0f0
Closes-Bug: #1552703


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552703

Title:
  Missing neutron-vyatta-agent console script

Status in neutron:
  Fix Released

Bug description:
  Though the agent is present in the latest neutron-vpnaas packages, the
  console script for the agent is not generated.
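
  For context, console scripts are generated from entry points declared at
  packaging time; a hedged sketch of the shape of the missing declaration
  (the module path here is an assumption, not the actual neutron-vpnaas code):

    from setuptools import setup

    setup(
        name='neutron-vpnaas',
        # Without a console_scripts entry like this, no
        # neutron-vyatta-agent executable is generated at install time.
        entry_points={
            'console_scripts': [
                'neutron-vyatta-agent = '
                'neutron_vpnaas.services.vpn.vyatta_agent:main',  # hypothetical path
            ],
        },
    )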

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552703/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352256] Re: Uploading a new object fails with Ceph as object storage backend using RadosGW

2016-03-04 Thread James Page
Adding task for Ubuntu Cloud Archive - we'll pick this up in the next
set of kilo updates.

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/kilo
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/kilo
   Status: New => Triaged

** Changed in: cloud-archive/kilo
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1352256

Title:
  Uploading a new object fails with Ceph as object storage backend using
  RadosGW

Status in Ubuntu Cloud Archive:
  Triaged
Status in Ubuntu Cloud Archive kilo series:
  Triaged
Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  While uploading a new object using Horizon, with Ceph as the object
storage backend, it fails with the error message "Error: Unable to upload
object".

  Ceph Release : Firefly
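
  The 411 in the log below means the PUT reached the server without a
  Content-Length header. A hedged reproduction sketch with python-requests
  (URL and token are placeholders):

    import requests

    url = 'http://radosgw.example.com/swift/v1/new-cont-dash/test'
    headers = {'X-Auth-Token': 'PLACEHOLDER-TOKEN'}

    # Passing an iterator makes requests use chunked transfer encoding
    # (no Content-Length), which this RadosGW/Apache setup rejects:
    resp = requests.put(url, data=iter([b'payload']), headers=headers)
    print(resp.status_code)  # 411 Length Required

    # A sized body lets requests set Content-Length, so the PUT succeeds:
    resp = requests.put(url, data=b'payload', headers=headers)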

  Error in horizon_error.log:

  
  [Wed Jul 23 09:04:46.840751 2014] [:error] [pid 30045:tid 140685813683968] 
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 
firefly-master.ashish.com
  [Wed Jul 23 09:04:46.842984 2014] [:error] [pid 30045:tid 140685813683968] 
WARNING:urllib3.connectionpool:HttpConnectionPool is full, discarding 
connection: firefly-master.ashish.com
  [Wed Jul 23 09:04:46.843118 2014] [:error] [pid 30045:tid 140685813683968] 
REQ: curl -i http://firefly-master.ashish.com/swift/v1/new-cont-dash/test -X 
PUT -H "X-Auth-Token: 91fc8466ce17e0d22af86de9b3343b2d"
  [Wed Jul 23 09:04:46.843227 2014] [:error] [pid 30045:tid 140685813683968] 
RESP STATUS: 411 Length Required
  [Wed Jul 23 09:04:46.843584 2014] [:error] [pid 30045:tid 140685813683968] 
RESP HEADERS: [('date', 'Wed, 23 Jul 2014 09:04:46 GMT'), ('content-length', 
'238'), ('content-type', 'text/html; charset=iso-8859-1'), ('connection', 
'close'), ('server', 'Apache/2.4.7 (Ubuntu)')]
  [Wed Jul 23 09:04:46.843783 2014] [:error] [pid 30045:tid 140685813683968] 
RESP BODY: 
  [Wed Jul 23 09:04:46.843907 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843930 2014] [:error] [pid 30045:tid 140685813683968] 
411 Length Required
  [Wed Jul 23 09:04:46.843937 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843944 2014] [:error] [pid 30045:tid 140685813683968] 
Length Required
  [Wed Jul 23 09:04:46.843951 2014] [:error] [pid 30045:tid 140685813683968] 
A request of the requested method PUT requires a valid Content-length.
  [Wed Jul 23 09:04:46.843957 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843963 2014] [:error] [pid 30045:tid 140685813683968] 

  [Wed Jul 23 09:04:46.843969 2014] [:error] [pid 30045:tid 140685813683968]
  [Wed Jul 23 09:04:46.844530 2014] [:error] [pid 30045:tid 140685813683968] 
Object PUT failed: http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 
411 Length Required  [first 60 chars of response] 
  [Wed Jul 23 09:04:46.844555 2014] [:error] [pid 30045:tid 140685813683968] 
http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 411 Length 
Required  [first 60 chars of response] 
  [Wed Jul 23 09:04:46.844607 2014] [:error] [pid 30045:tid 140685813683968] 
http://firefly-master.ashish.com/swift/v1/new-cont-dash/test 411 Length 
Required  [first 60 chars of response] 
  [Wed Jul 23 09:04:46.844900 2014] [:error] [pid 30045:tid 140685813683968] 

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1352256/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553128] [NEW] MOS8.0 (mirantis liberty) + neutron-lbaas-dashboard

2016-03-04 Thread magicboiz
Public bug reported:

When installing neutron-lbaas-dashboard into Mirantis OpenStack 8.0
(based on Liberty), there is nothing on Horizon's Project->Network->Load
Balancers panel. No buttons, no menus... nothing.

Summary of steps followed:
1. Install OpenStack Liberty with Mirantis Fuel 8.0 (deploys OpenStack with 
Ubuntu 14.04)
2. git clone neutron-lbaas-dashboard
3. python setup.py install
4. Enable the new project panel ng_loadbalancersv2: copy 
_1481_project_ng_loadbalancersv2_panel.py from the neutron_lbaas_dashboard/enabled 
directory to openstack_dashboard/local/enabled (see the sketch after this list)
5. /usr/share/openstack-dashboard/manage.py collectstatic
6. /usr/share/openstack-dashboard/manage.py compress
7. service apache2 restart
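
For reference, a hedged sketch of what a Horizon "enabled" file like the one
copied in step 4 typically contains (the ADD_PANEL path is an assumption, not
the actual neutron-lbaas-dashboard code); if the panel still does not appear,
this file not being picked up from openstack_dashboard/local/enabled is a
likely suspect:

    # _1481_project_ng_loadbalancersv2_panel.py -- hypothetical sketch.
    PANEL = 'ngloadbalancersv2'
    PANEL_DASHBOARD = 'project'
    PANEL_GROUP = 'network'
    # Module path is a guess, for illustration purposes only.
    ADD_PANEL = ('neutron_lbaas_dashboard.dashboards.project.'
                 'ngloadbalancersv2.panel.NGLoadBalancersV2')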

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-lbaas-dashboard

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1553128

Title:
  MOS8.0 (mirantis liberty) + neutron-lbaas-dashboard

Status in neutron:
  New

Bug description:
  When installing neutron-lbaas-dashboard into Mirantis OpenStack 8.0
  (based on Liberty), there is nothing on Horizon's Project->Network->Load
  Balancers panel. No buttons, no menus... nothing.

  Summary of steps followed:
  1. Install OpenStack Liberty with Mirantis Fuel 8.0 (deploys OpenStack with 
Ubuntu 14.04)
  2. git clone neutron-lbaas-dashboard
  3. python setup.py install
  4. Enable the new project panel ng_loadbalancersv2: copy 
_1481_project_ng_loadbalancersv2_panel.py from the neutron_lbaas_dashboard/enabled 
directory to openstack_dashboard/local/enabled
  5. /usr/share/openstack-dashboard/manage.py collectstatic
  6. /usr/share/openstack-dashboard/manage.py compress
  7. service apache2 restart

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1553128/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553142] [NEW] Cannot suspend instance from Details page if the Instance is not on 1st page of table

2016-03-04 Thread Timur Sufiev
Public bug reported:

Cannot suspend instance from Horizon "Details" page. Problem is
reproduced only if an instance does not belong to the 1st page of the
list.

Steps to reproduce:
  1. Boot an instance
  2. Ensure that it lands on the 2nd page of the list (so it is not visible 
on the 1st page of /admin/instances/ - crucial!)
  3. Proceed to the System->Instances page (or Project->Compute->Instances) (the page 
with the instance listing)
  4. Try to suspend/resume that instance
  5. Check that it works
  6. Proceed to the Details page by clicking on the instance name
  7. Try to suspend the instance using the dropdown menu in the upper-right corner

Expected result:
  1. Instance is suspended

Actual result:
  1. Nothing happens; the instance is still up and running.

** Affects: horizon
 Importance: Medium
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1553142

Title:
  Cannot suspend instance from Details page if the Instance is not on
  1st page of table

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Cannot suspend instance from Horizon "Details" page. Problem is
  reproduced only if an instance does not belong to the 1st page of the
  list.

  Steps to reproduce:
    1. Boot an instance
    2. Ensure that it lands on the 2nd page of the list (so it is not visible 
on the 1st page of /admin/instances/ - crucial!)
    3. Proceed to the System->Instances page (or Project->Compute->Instances) (the page 
with the instance listing)
    4. Try to suspend/resume that instance
    5. Check that it works
    6. Proceed to the Details page by clicking on the instance name
    7. Try to suspend the instance using the dropdown menu in the upper-right corner

  Expected result:
1. Instance is suspended

  Actual result:
    1. Nothing happens; the instance is still up and running.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1553142/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527575] Re: failed to create user from domain scoped token

2016-03-04 Thread Paul Karikh
Checked with current master and it looks like the issue is fixed now. The
issue was fixed in DOA 2.2.0, as Doug said. Horizon now installs
django_openstack_auth==2.2.0 in its venv. So I think we can close it for
Horizon as invalid.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1527575

Title:
  failed to create user from domain scoped token

Status in django-openstack-auth:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  current tests are reporting
  .Failed to create user from domain scoped token.
  2015-12-18 10:15:46.894 | .Failed to create user from domain scoped token.
  2015-12-18 10:15:46.919 | ..Failed to create user from domain scoped token.
  2015-12-18 10:15:46.925 | .Failed to create user from domain scoped token.
  2015-12-18 10:15:46.942 | .Failed to create user from domain scoped token.
  2015-12-18 10:15:46.997 | .Failed to create user from domain scoped token.

  but still passing.
  E.g. this one here:
  
http://logs.openstack.org/13/259013/3/check/gate-horizon-tox-py27dj18/dac7716/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1527575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553144] [NEW] When some instances aren't deleted correctly and libvirt still keeps the domain for the instance, the resource tracker will fail to update available resources

2016-03-04 Thread Alex Xu
Public bug reported:

When an instance was deleted in the DB but is still present on the compute node,
the resource tracker will fail to update available resources.


2016-03-04 10:58:28.143 ERROR nova.compute.manager 
[req-d2f1c99b-0e81-4b6d-9361-a40bd2218141 None None] Error updating resources 
for node vm6.


2016-03-04 10:58:28.143 TRACE nova.compute.manager Traceback (most recent call 
last):
2016-03-04 10:58:28.143 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/manager.py", line 6379, in 
update_available_resource
2016-03-04 10:58:28.143 TRACE nova.compute.manager 
rt.update_available_resource(context)
2016-03-04 10:58:28.143 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 491, in 
update_available_resource
2016-03-04 10:58:28.143 TRACE nova.compute.manager resources = 
self.driver.get_available_resource(self.nodename)
2016-03-04 10:58:28.143 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 5414, in 
get_available_resource
2016-03-04 10:58:28.143 TRACE nova.compute.manager disk_over_committed = 
self._get_disk_over_committed_size_total()
2016-03-04 10:58:28.143 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 7047, in 
_get_disk_over_committed_size_total
2016-03-04 10:58:28.143 TRACE nova.compute.manager 
local_instances[guest.uuid], bdms[guest.uuid])
2016-03-04 10:58:28.143 TRACE nova.compute.manager KeyError: 
'49505c88-b38a-4100-ab56-97958b48b533'
2016-03-04 10:58:28.143 TRACE nova.compute.manager


The available resources won't get updated until the periodic task
'_cleanup_running_deleted_instances' runs, and only if
running_deleted_instance_action is 'reap'.
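
A hedged sketch of the kind of guard that would avoid the KeyError
(illustrative only, not the actual nova fix):

    def disk_over_committed_total(guests, local_instances, bdms, per_guest_size):
        """Sum over-committed disk sizes, skipping libvirt domains whose
        instance record is already gone from the DB (the KeyError case)."""
        total = 0
        for guest in guests:
            if guest.uuid not in local_instances or guest.uuid not in bdms:
                continue  # deleted in the DB but still defined in libvirt
            total += per_guest_size(local_instances[guest.uuid],
                                    bdms[guest.uuid])
        return total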

** Affects: nova
 Importance: Undecided
 Assignee: Alex Xu (xuhj)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Alex Xu (xuhj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1553144

Title:
  When some instances aren't deleted correctly and libvirt still keeps the
  domain for the instance, the resource tracker will fail to update
  available resources

Status in OpenStack Compute (nova):
  New

Bug description:
  When an instance was deleted in the DB but is still present on the compute
  node, the resource tracker will fail to update available resources.

  
  2016-03-04 10:58:28.143 ERROR nova.compute.manager 
[req-d2f1c99b-0e81-4b6d-9361-a40bd2218141 None None] Error updating resources 
for node vm6.

  
  2016-03-04 10:58:28.143 TRACE nova.compute.manager Traceback (most recent 
call last):
  2016-03-04 10:58:28.143 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/manager.py", line 6379, in 
update_available_resource
  2016-03-04 10:58:28.143 TRACE nova.compute.manager 
rt.update_available_resource(context)
  2016-03-04 10:58:28.143 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 491, in 
update_available_resource
  2016-03-04 10:58:28.143 TRACE nova.compute.manager resources = 
self.driver.get_available_resource(self.nodename)
  2016-03-04 10:58:28.143 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 5414, in 
get_available_resource
  2016-03-04 10:58:28.143 TRACE nova.compute.manager disk_over_committed = 
self._get_disk_over_committed_size_total()
  2016-03-04 10:58:28.143 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 7047, in 
_get_disk_over_committed_size_total
  2016-03-04 10:58:28.143 TRACE nova.compute.manager 
local_instances[guest.uuid], bdms[guest.uuid])
  2016-03-04 10:58:28.143 TRACE nova.compute.manager KeyError: 
'49505c88-b38a-4100-ab56-97958b48b533'
  2016-03-04 10:58:28.143 TRACE nova.compute.manager


  The available resources won't get updated until the periodic task
  '_cleanup_running_deleted_instances' runs, and only if
  running_deleted_instance_action is 'reap'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1553144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553149] [NEW] Instance in ERROR state due to ConnectFailure with keystone

2016-03-04 Thread Prashant Shetty
Public bug reported:

When trying to run the rally scenario below with concurrency 50, we see an issue with 
keystone. Can someone take a look?
NOTE: Things work fine with concurrency 10.

1. Create a tenant and a network.
2. Create a T1 router and set the external network as its gateway
3. Add the network created in step 1 to the T1 router
4. Launch an instance (on KVM) in the private network and assign a FIP. Ping the FIP


Setup:

Single controller(32vCPU, 48GB RAM)
3 Network Nodes
100 ESX computes and 100 KVM computes

Rally reports and logs are attached to the bug.

Logs:

2016-03-01 01:26:34.699 DEBUG oslo_concurrency.lockutils 
[req-409c8595-d093-4cfe-8b98-b49d2c2accad 
ctx_rally_d6ed151ea67e4b78930c39c406fa64ed_user_0 
ctx_rally_9526f233-a1b9-446b-beb6-d14dc678ff37_tenant_10] Releasing semaphore 
"refresh_cache-8c324106-c6dd-4b90-876d-e3cc33adfebf" from (pid=26585) lock 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:225
2016-03-01 01:26:34.704 ERROR nova.compute.manager 
[req-409c8595-d093-4cfe-8b98-b49d2c2accad 
ctx_rally_d6ed151ea67e4b78930c39c406fa64ed_user_0 
ctx_rally_9526f233-a1b9-446b-beb6-d14dc678ff37_tenant_10] [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] Instance failed to spawn
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] Traceback (most recent call last):
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2190, in _build_resources
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] yield resources
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2036, in _build_and_run_instance
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] block_device_info=block_device_info)
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2758, in spawn
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] admin_pass=admin_password)
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 3251, in _create_image
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] content=files, extra_md=extra_md, 
network_info=network_info)
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/api/metadata/base.py", line 160, in __init__
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] self.network_metadata = 
netutils.get_network_metadata(network_info)
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/virt/netutils.py", line 185, in get_network_metadata
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] if not network_info:
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/network/model.py", line 526, in __len__
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] return self._sync_wrapper(fn, *args, 
**kwargs)
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/network/model.py", line 513, in _sync_wrapper
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] self.wait()
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/opt/stack/nova/nova/network/model.py", line 545, in wait
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] self[:] = self._gt.wait()
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 175, in 
wait
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] return self._exit_event.wait()
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf] return hubs.get_hub().switch()
2016-03-01 01:26:34.704 TRACE nova.compute.manager [instance: 
8c324106-c6dd-4b90-876d-e3cc33adfebf]   File 
"/usr/local/lib/python

[Yahoo-eng-team] [Bug 1553148] [NEW] Annotation in REST create_user is confusing

2016-03-04 Thread Wang Bo
Public bug reported:

email is an optional argument; both '' and None can successfully create a new 
user with no email info. The difference is:
If email=None, the value will be NULL in the DB.
If email='', there is no value in the email column in the DB.

Clear up the current annotation "# not sure why email is forced to None, but
other code does it".

** Affects: horizon
 Importance: Undecided
 Assignee: Wang Bo (chestack)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Wang Bo (chestack)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1553148

Title:
  Annotation in REST create_user is confusing

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  email is an optional argument; both '' and None can successfully create a 
new user with no email info. The difference is:
  If email=None, the value will be NULL in the DB.
  If email='', there is no value in the email column in the DB.

  Clear up the current annotation "# not sure why email is forced to None, but
  other code does it".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1553148/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553152] [NEW] misleading API documentation for block_device_mapping_v2

2016-03-04 Thread adbot
Public bug reported:

Documentation [1] about `block_device_mapping_v2` when creating a server
instance is misleading as it doesn't explain that it must actually be an
array of mappings and there is no complete list of the supported keys.
For example `volume_size` and `uuid` are not even mentioned.

Thanks to an unrelated github bug [2] I figured it's something like this:
"block_device_mapping_v2": [
  {
"boot_index": "0",
"uuid": "ac408821-c95a-448f-9292-73986c790911",
"source_type": "image",
"volume_size": "25",
"destination_type": "volume",
"delete_on_termination": true
  }
]

The above example is something that very quickly gets you to the point.
In the block_device_mapping.rst doc I see some of the things explained, but
I could only find that doc by grepping nova's sources, and I still
couldn't figure out from that doc how in hell I should construct my API
call.

What I wanted to do was basically launch an instance off a new custom-
sized volume. That turned out to be very easy and concise eventually, but
finding that out took hours for me as I'm simply an API user and I have
no experience whatsoever installing or configuring, let alone hacking on,
OpenStack.

P.S. I'm using a similar feature in GCE. They have it even nicer. When
you specify the instance disks, it supports any options that are
supported by the api call creating a standalone disk. I guess values are
then passed to the disk api as is. Might be worth considering for a
future API version. e.g. at the moment I can't specify a name for the
new volume or many of the other options supported by the OS volumes API.

[1] http://developer.openstack.org/api-ref-compute-v2.1.html#createServer
[2] 
https://github.com/ggiamarchi/vagrant-openstack-provider/issues/209#issuecomment-73961050
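
A hedged sketch of the boot-from-new-volume call via python-novaclient
(credentials and flavor are placeholders; the mapping keys mirror the JSON
above). Note that block_device_mapping_v2 is a list of mapping dicts:

    from novaclient import client

    nova = client.Client('2', 'demo', 'secret', 'demo',
                         'http://controller:5000/v2.0')

    # Boot from a new 25 GB volume created from the given image.
    nova.servers.create(
        name='vm-on-volume', image=None, flavor='FLAVOR-ID',
        block_device_mapping_v2=[{
            'boot_index': '0',
            'uuid': 'ac408821-c95a-448f-9292-73986c790911',
            'source_type': 'image',
            'destination_type': 'volume',
            'volume_size': '25',
            'delete_on_termination': True,
        }])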

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1553152

Title:
  misleading API documentation for block_device_mapping_v2

Status in OpenStack Compute (nova):
  New

Bug description:
  Documentation [1] about `block_device_mapping_v2` when creating a
  server instance is misleading as it doesn't explain that it must
  actually be an array of mappings and there is no complete list of the
  supported keys. For example `volume_size` and `uuid` are not even
  mentioned.

  Thanks to an unrelated github bug [2] I figured it's something like this:
  "block_device_mapping_v2": [
{
  "boot_index": "0",
  "uuid": "ac408821-c95a-448f-9292-73986c790911",
  "source_type": "image",
  "volume_size": "25",
  "destination_type": "volume",
  "delete_on_termination": true
    }
  ]

  The above example is something that very quickly gets you to the
  point. The block_device_mapping.rst doc explains some of the things,
  but I could only find that doc by grepping nova's sources, and I still
  couldn't figure out from that doc how in hell I should construct my API
  call.

  What I wanted to do was basically launch an instance off a new
  custom-sized volume. That turned out to be very easy and concise
  eventually, but finding that out took hours for me as I'm simply an API
  user and I have no experience whatsoever installing or configuring, let
  alone hacking on, OpenStack.

  P.S. I'm using a similar feature in GCE. They have it even nicer. When
  you specify the instance disks, it supports any options that are
  supported by the api call creating a standalone disk. I guess values
  are then passed to the disk api as is. Might be worth considering for
  a future API version. e.g. at the moment I can't specify a name for
  the new volume or many of the other options supported by the OS
  volumes API.

  [1] http://developer.openstack.org/api-ref-compute-v2.1.html#createServer
  [2] 
https://github.com/ggiamarchi/vagrant-openstack-provider/issues/209#issuecomment-73961050

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1553152/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1545117] Re: neutron: 500 error when trying to attach no network to an instance with no network

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/279839
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d90115c3e806f896dc90e226af7b06b21ba92abc
Submitter: Jenkins
Branch: master

commit d90115c3e806f896dc90e226af7b06b21ba92abc
Author: Matt Riedemann 
Date:   Fri Feb 12 18:34:59 2016 -0800

neutron: handle attach interface case with no networks

It's possible to boot an instance without a network and it's
also possible to try and attach an interface without specifying
a port, network or fixed IP. If no network is specified on the
attach request and there are no networks available to the
instance's project (shared or not), then the
allocate_port_for_instance method fails with an IndexError.

The code is currently checking for a case of ambiguous networks
when a specific network ID is not requested, but is not checking
for the case that no network ID is specified and no networks are
available. This change adds that check.

A new type of exception is raised from the network API and handled
in the REST API so we return a 400 to the user rather than a 500.

Note that we don't return an empty network_info to the compute
manager because that results in InterfaceAttachFailed raised to
the REST API which is interpreted as a 500.

Closes-Bug: #1545117

Change-Id: Iad762ebef08c259339ea5582e65266620fbab0ac
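
A hedged sketch of the shape of the added check (exception names are
placeholders, not the actual nova code):

    class NoNetworksAvailable(Exception):
        """Stand-in for the new exception type; mapped to HTTP 400."""

    def pick_network(requested_network_id, available_nets):
        # Previously, no requested network plus an empty network list fell
        # through to available_nets[0], raising IndexError (an HTTP 500).
        if requested_network_id is None and not available_nets:
            raise NoNetworksAvailable()
        return requested_network_id or available_nets[0]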


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1545117

Title:
  neutron: 500 error when trying to attach no network to an instance
  with no network

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  It's possible to create a VM with nova using neutron as the backend
  and have no network information. If the tenant doesn't have any
  available network in neutron and doesn't request any network, the
  neutron backend will simply log a message and continue.

  If you then later attempt to use the os-attach-interface API and don't
  provide a network, and the tenant still doesn't have any networks
  available in neutron, then the request fails in the neutronv2 API code
  with an IndexError because it assumes there is at least one
  available network:

  http://paste.openstack.org/show/486856/

  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions 
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
143, in _dispatch_and_reply
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions 
executor_callback))
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions 
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
189, in _dispatch
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions 
executor_callback)
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions 
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
130, in _do_dispatch
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions result = 
func(ctxt, **new_args)
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions 
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/exception.py", line 110, in wrapped
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions payload)
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions 
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 204, in 
__exit__
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions 
six.reraise(self.type_, self.value, self.tb)
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions 
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/exception.py", line 89, in wrapped
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions return 
f(self, context, *args, **kw)
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions 
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/compute/manager.py", line 385, in decorated_function
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions 
kwargs['instance'], e, sys.exc_info())
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions 
  2016-02-12 09:34:32.188 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 204, in 
__exit__
  2016-02-12 09:34:32.188 TRACE nova.api.open

[Yahoo-eng-team] [Bug 1553128] Re: MOS8.0 (mirantis liberty) + neutron-lbaas-dashboard

2016-03-04 Thread Andreas Scheuring
** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1553128

Title:
  MOS8.0 (mirantis liberty) + neutron-lbaas-dashboard

Status in OpenStack Dashboard (Horizon):
  New
Status in neutron:
  Invalid

Bug description:
  When installing neutron-lbaas-dashboard into Mirantis OpenStack 8.0
  (based on Liberty), there is nothing on Horizon's Project->Network->Load
  Balancers panel. No buttons, no menus... nothing.

  Summary of steps followed:
  1. Install OpenStack Liberty with Mirantis Fuel 8.0 (deploys OpenStack with 
Ubuntu 14.04)
  2. git clone neutron-lbaas-dashboard
  3. python setup.py install
  4. Enable the new project panel ng_loadbalancersv2: copy 
_1481_project_ng_loadbalancersv2_panel.py from the neutron_lbaas_dashboard/enabled 
directory to openstack_dashboard/local/enabled
  5. /usr/share/openstack-dashboard/manage.py collectstatic
  6. /usr/share/openstack-dashboard/manage.py compress
  7. service apache2 restart

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1553128/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487451] Re: Stale pci_stats in the DB after PCI reconfiguration

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/216049
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=47181ae3ebcd1533c22378ee31a4b1f0848926d6
Submitter: Jenkins
Branch: master

commit 47181ae3ebcd1533c22378ee31a4b1f0848926d6
Author: Ludovic Beliveau 
Date:   Wed Nov 18 11:52:39 2015 -0500

Allow saving empty pci_device_pools in ComputeNode object

Prior to this patch, saving a ComputeNode with a pci_device_pools attribute
that has no objects specified in it (an empty PciDevicePool list) would result in
the change not being saved. Objects of type PciDevicePoolList are evaluated
like a list, therefore a conditional statement like 'if pools' will
evaluate to False for an empty list even if 'pools' is not None.

Without this fix, if 'pci_passthrough_whitelist' is cleared in the
configuration, the nova scheduler still thinks a compute node has PCI devices
available and can still trigger scheduling an instance with PCI devices on
the node.

Change-Id: Ib3c19d569b9b3b23a293ad55dd9023291435d5a6
Closes-Bug: #1487451
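
The truthiness pitfall in a self-contained snippet (illustrative only):

    pools = []  # an empty PciDevicePoolList behaves like an empty list

    if pools:               # False for an empty list: the save was skipped
        print('saved (old, buggy check)')

    if pools is not None:   # True: the emptied pools are persisted
        print('saved (fixed check)')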


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487451

Title:
  Stale pci_stats in the DB after PCI reconfiguration

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Changes in PCI device configuration are not reflected in the database
  (pci_stats). After a nova restart, pci_stats still holds stale data.
  This happens only when the compute node at some point had interfaces
  configured with PCI SR-IOV or passthrough (in the
  pci_passthrough_whitelist) and then all those interfaces were removed.

  Steps to reproduce:
  1. Configure SR-IOV on an interface and edit 
nova.conf/pci_passthrough_whitelist accordingly.
  2. Start nova on the compute.
  3. Remove the SR-IOV interface and its configuration in nova.conf.
  4. Restart nova on the compute.
  5. Validate that pci_stats still holds the PCI device information by looking 
at the SQL database.

  This behavior causes the scheduler to still try to schedule an instance
  on the compute node that had PCI configured, even though no PCI devices
  are available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1487451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553184] [NEW] Slow Query on Instances Table

2016-03-04 Thread Joseph bajin
Public bug reported:

We are currently running Juno (2014.2.4). 
We have a few tenants that create and delete a lot of instances, 
so they have a lot of entries in the instances table.

I see the following bug, which was not brought into Juno but was delivered
in Kilo and was supposed to help with this same type of issue, but it
doesn't seem to have worked at all.
https://bugs.launchpad.net/nova/+bug/1378395

After reviewing the query a bit more, I found that another index could
be added that dramatically dropped the time the query took to run (a
hedged illustration follows below).
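
The report does not show which index was added (the message is truncated
below); purely as an illustration, a migration adding a composite index on
commonly filtered columns might look like this hedged sketch (the column
choice is a guess from the query shape, not the reporter's actual index):

    from alembic import op

    def upgrade():
        # Hypothetical composite index; the columns are an assumption.
        op.create_index('instances_project_id_deleted_idx',
                        'instances', ['project_id', 'deleted'])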

This is the query in question:

SELECT anon_1.instances_deleted_at AS anon_1_instances_deleted_at, 
anon_1.instances_deleted AS anon_1_instances_deleted, 
anon_1.instances_created_at AS anon_1_instances_created_at, 
anon_1.instances_updated_at AS anon_1_instances_updated_at, anon_1.instances_id 
AS anon_1_instances_id, anon_1.instances_user_id AS anon_1_instances_user_id, 
anon_1.instances_project_id AS anon_1_instances_project_id, 
anon_1.instances_image_ref AS anon_1_instances_image_ref, 
anon_1.instances_kernel_id AS anon_1_instances_kernel_id, 
anon_1.instances_ramdisk_id AS anon_1_instances_ramdisk_id, 
anon_1.instances_hostname AS anon_1_instances_hostname, 
anon_1.instances_launch_index AS anon_1_instances_launch_index, 
anon_1.instances_key_name AS anon_1_instances_key_name, 
anon_1.instances_key_data AS anon_1_instances_key_data, 
anon_1.instances_power_state AS anon_1_instances_power_state, 
anon_1.instances_vm_state AS anon_1_instances_vm_state, 
anon_1.instances_task_state AS anon_1_instances_task_state,
anon_1.instances_memory_mb AS anon_1_instances_memory_mb, anon_1.instances_vcpus AS 
anon_1_instances_vcpus, anon_1.instances_root_gb AS anon_1_instances_root_gb, 
anon_1.instances_ephemeral_gb AS anon_1_instances_ephemeral_gb, 
anon_1.instances_ephemeral_key_uuid AS anon_1_instances_ephemeral_key_uuid, 
anon_1.instances_host AS anon_1_instances_host, anon_1.instances_node AS 
anon_1_instances_node, anon_1.instances_instance_type_id AS 
anon_1_instances_instance_type_id, anon_1.instances_user_data AS 
anon_1_instances_user_data, anon_1.instances_reservation_id AS 
anon_1_instances_reservation_id, anon_1.instances_scheduled_at AS 
anon_1_instances_scheduled_at, anon_1.instances_launched_at AS 
anon_1_instances_launched_at, anon_1.instances_terminated_at AS 
anon_1_instances_terminated_at, anon_1.instances_availability_zone AS 
anon_1_instances_availability_zone, anon_1.instances_display_name AS 
anon_1_instances_display_name, anon_1.instances_display_description AS 
anon_1_instances_display_description,
anon_1.instances_launched_on AS anon_1_instances_launched_on, 
anon_1.instances_locked AS anon_1_instances_locked, anon_1.instances_locked_by 
AS anon_1_instances_locked_by, anon_1.instances_os_type AS 
anon_1_instances_os_type, anon_1.instances_architecture AS 
anon_1_instances_architecture, anon_1.instances_vm_mode AS 
anon_1_instances_vm_mode, anon_1.instances_uuid AS anon_1_instances_uuid, 
anon_1.instances_root_device_name AS anon_1_instances_root_device_name, 
anon_1.instances_default_ephemeral_device AS 
anon_1_instances_default_ephemeral_device, anon_1.instances_default_swap_device 
AS anon_1_instances_default_swap_device, anon_1.instances_config_drive AS 
anon_1_instances_config_drive, anon_1.instances_access_ip_v4 AS 
anon_1_instances_access_ip_v4, anon_1.instances_access_ip_v6 AS 
anon_1_instances_access_ip_v6, anon_1.instances_auto_disk_config AS 
anon_1_instances_auto_disk_config, anon_1.instances_progress AS 
anon_1_instances_progress, anon_1.instances_shutdown_terminate AS
anon_1_instances_shutdown_terminate, anon_1.instances_disable_terminate AS 
anon_1_instances_disable_terminate, anon_1.instances_cell_name AS 
anon_1_instances_cell_name, anon_1.instances_internal_id AS 
anon_1_instances_internal_id, anon_1.instances_cleaned AS 
anon_1_instances_cleaned, instance_info_caches_1.deleted_at AS 
instance_info_caches_1_deleted_at, instance_info_caches_1.deleted AS 
instance_info_caches_1_deleted, instance_info_caches_1.created_at AS 
instance_info_caches_1_created_at, instance_info_caches_1.updated_at AS 
instance_info_caches_1_updated_at, instance_info_caches_1.id AS 
instance_info_caches_1_id, instance_info_caches_1.network_info AS 
instance_info_caches_1_network_info, instance_info_caches_1.instance_uuid AS 
instance_info_caches_1_instance_uuid, security_groups_1.deleted_at AS 
security_groups_1_deleted_at, security_groups_1.deleted AS 
security_groups_1_deleted, security_groups_1.created_at AS 
security_groups_1_created_at, security_groups_1.updated_at AS 
security_groups_1_updated_at, security_groups_1.id AS security_groups_1_id,
security_groups_1.name 
AS security_groups_1_name, security_groups_1.description AS 
security_groups_1_description, security_groups_1.user_id AS 
security_groups_1_user_id, security_groups_1.project_id AS 
security_groups_1_project_id
FROM (SELECT instances.deleted_at AS instances_deleted_at, i

[Yahoo-eng-team] [Bug 1321785] Re: RFE: block_device_info dict should have a password key rather than clear password

2016-03-04 Thread Daniel Berrange
The ovo change merely added a new SensitiveString field type. We still
have to actually convert Nova to use that new field type where needed.
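
For reference, a hedged sketch of using the new field type (the object below
is a placeholder, not actual nova code):

    from oslo_versionedobjects import base, fields

    class ExampleConnectionInfo(base.VersionedObject):
        fields = {
            # Masked when the object is logged or repr'd,
            # unlike a plain StringField.
            'auth_password': fields.SensitiveStringField(),
        }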

** Changed in: nova
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1321785

Title:
  RFE: block_device_info dict should have a password key rather than
  clear password

Status in OpenStack Compute (nova):
  Confirmed
Status in oslo.versionedobjects:
  Fix Released

Bug description:
  See bug 1319943 and the related patch
  https://review.openstack.org/#/c/93787/ for details, but right now the
  block_device_info dict passed around in the nova virt driver can
  contain a clear text password for the auth_password key.

  That bug and patch are masking the password when logged in the
  immediate known locations, but this could continue to crop up so we
  should change the design such that the block_device_info dict doesn't
  contain the password but rather a key to a store that nova can
  retrieve the password for use.

  Comment from Daniel Berrange in the patch above:

  "Long term I think we need to figure out a way to remove the passwords
  from any data dicts we pass around. Ideally the block device info
  would merely contain something like a UUID to identify a password,
  which Nova could use to fetch the actual password from a secure
  password manager service at time of use. Thus we wouldn't have to
  worry about random objects/dicts containing actual passwords.
  Obviously this isn't something we can do now, but could you file an
  RFE to address this from a design POV, because masking passwords at
  time of logging call is not really a viable long term strategy IMHO."

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1321785/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447342] Re: libvirtError: XML error: Missing CPU model name lead to compute service fail to start

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/286868
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=04bbf658e7998d40ffbdb2b6467dfab8dc5fde55
Submitter: Jenkins
Branch: master

commit 04bbf658e7998d40ffbdb2b6467dfab8dc5fde55
Author: Matt Riedemann 
Date:   Tue Mar 1 17:36:38 2016 -0500

libvirt: don't attempt to get baseline cpu features if host cpu model is 
None

In certain cases, libvirt can't determine the host's CPU model.

This is fine when you're setting virt_type=qemu and cpu_mode=none,
for example (like with nested virtualization).

If we can't determine the host's cpu model, don't attempt to get
cpu features on startup of the compute service (since it will
crash the service).

Change-Id: I81ae5a04c7b4eb84e976902a575d890d4e850151
Closes-Bug: #1447342
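
A hedged sketch of the guard the commit describes (illustrative only, not the
actual nova patch):

    def safe_baseline_cpu(conn, caps, flags):
        # If libvirt could not determine the host CPU model (e.g. nested
        # virt with virt_type=qemu and cpu_mode=none), skip the baseline
        # feature expansion instead of crashing the compute service.
        if caps.host.cpu.model is None:
            return None
        return conn.baselineCPU([caps.host.cpu.to_xml()], flags)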


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447342

Title:
  libvirtError: XML error: Missing CPU model name lead to compute
  service fail to start

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  New
Status in OpenStack Compute (nova) liberty series:
  New

Bug description:
  Got the following error, and the compute service failed to start.
  Not sure whether we should prevent the compute service from starting
  on 'libvirtError: XML error: Missing CPU model name' or not.

  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup result = 
function(*args, **kwargs)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/openstack/common/service.py", line 497, in run_service
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup 
service.start()
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/service.py", line 164, in start
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/compute/manager.py", line 1258, in init_host
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup 
self.driver.init_host(host=self.host)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 529, in init_host
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup 
self._do_quality_warnings()
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 507, in _do_quality_warnings
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup caps = 
self._host.get_capabilities()
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/opt/stack/nova/nova/virt/libvirt/host.py", line 753, in get_capabilities
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup 
libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup result = 
proxy_call(self._autowrap, f, *args, **kwargs)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in 
proxy_call
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup rv = 
execute(f, *args, **kwargs)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup 
six.reraise(c, e, tb)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup rv = 
meth(*args, **kwargs)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 3153, in baselineCPU
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup if ret is 
None: raise libvirtError ('virConnectBaselineCPU() failed', conn=self)
  2015-04-20 14:06:57.351 TRACE nova.openstack.common.threadgroup libvirtError: 
XML error: Missing CPU model name

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1538014] Re: the update time is not updated when updating the zone of an aggregate

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/284023
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b4f7066e7c074dd2fb50d99df593432937c7a8ae
Submitter: Jenkins
Branch: master

commit b4f7066e7c074dd2fb50d99df593432937c7a8ae
Author: Pallavi 
Date:   Wed Feb 24 14:47:36 2016 +0530

Update time is not updated when metadata of aggregate is updated

For example, when the aggregate zone is updated, the time is not updated
in the updated_at field.

So, the code is modified such that the time in the updated_at field is updated
whenever the aggregate metadata content is modified.

Change-Id: Icb65313ba85562fadeddbc1890ca5d463e74d3c2
Closes-Bug: #1538014
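
A hedged sketch of the idea (illustrative only, not the actual nova change):

    from oslo_utils import timeutils

    def update_aggregate_metadata(aggregate, new_metadata):
        aggregate.metadata.update(new_metadata)
        # Previously updated_at was left untouched on metadata-only changes.
        aggregate.updated_at = timeutils.utcnow()
        return aggregate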


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538014

Title:
  the update time is not updated when updating the zone of an aggregate

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  [Summary]
  The update time is not updated when updating the zone of an aggregate.

  [Topo]
  devstack all-in-one node

  [Description and expected result]
  The update time should be updated when updating the zone of an aggregate.

  [Reproducible or not]
  reproducible

  [Recreate Steps]
  1) The update time is not updated when updating the zone of an aggregate:
  root@45-59:~# openstack aggregate  set --zone "AB" agg1
  +---+-+
  | Field | Value   |
  +---+-+
  | availability_zone | AB  |
  | created_at| 2016-01-26T11:34:00.00  |
  | deleted   | False   |
  | deleted_at| None|
  | hosts | []  |
  | id| 5   |
  | metadata  | {u'abc': u'1', u'availability_zone': u'AB'} |
  | name  | agg1|
  | updated_at| None>>>ISSUE|
  +---+-+
  root@45-59:~# openstack aggregate  set --zone "ab" agg1
  +---+-+
  | Field | Value   |
  +---+-+
  | availability_zone | ab  |
  | created_at| 2016-01-26T11:34:00.00  |
  | deleted   | False   |
  | deleted_at| None|
  | hosts | []  |
  | id| 5   |
  | metadata  | {u'abc': u'1', u'availability_zone': u'ab'} |
  | name  | agg1|
  | updated_at| None>>>ISSUE|
  +---+-+

  
  [Configuration]
  reproducible bug, no need

  [logs]
  reproducible bug, no need

  [Root cause analysis or debug info]
  reproducible bug

  [Attachment]
  None

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1538014/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505153] Re: gates broken by WebOb 1.5 release

2016-03-04 Thread Matt Riedemann
** Changed in: cinder/kilo
   Status: New => Fix Released

** Changed in: cinder/kilo
 Assignee: (unassigned) => John Griffith (john-griffith)

** Changed in: nova/kilo
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova/kilo
   Importance: Undecided => Critical

** Changed in: cinder/kilo
   Importance: Undecided => Critical

** Tags removed: kilo-backport-potential liberty-backport-potential
liberty-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1505153

Title:
  gates broken by WebOb 1.5 release

Status in Cinder:
  Fix Released
Status in Cinder kilo series:
  Fix Released
Status in Manila:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in openstack-ansible:
  Fix Released

Bug description:
  Hi,

  WebOb 1.5 was released yesterday. test_misc of Cinder starts failing
  with this release. I wrote this simple fix which should be enough to
  repair it:

  https://review.openstack.org/233528
  "Fix test_misc for WebOb 1.5"

   class ConvertedException(webob.exc.WSGIHTTPException):
  -    def __init__(self, code=0, title="", explanation=""):
  +    def __init__(self, code=500, title="", explanation=""):
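
  The incompatibility can be reproduced with plain WebOb (a sketch
  modelled on the class above; WebOb >= 1.5 started validating the
  status line, so a code of 0 is rejected while 500 is accepted):

      import webob.exc

      class ConvertedException(webob.exc.WSGIHTTPException):
          def __init__(self, code=0, title="", explanation=""):
              self.code = code
              self.title = title
              self.explanation = explanation
              super(ConvertedException, self).__init__()

      try:
          ConvertedException()
      except Exception as exc:
          # Construction fails under WebOb >= 1.5; code=500 succeeds.
          print(exc)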

  Victor

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1505153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526675] Re: test_models_sync fails with 'Models and migration scripts aren't in sync'

2016-03-04 Thread Matt Riedemann
This was fixed for nova in stable/liberty with this squashed backport:

https://github.com/openstack/nova/commit/94d6b692d8d81e68ca7cf9e66e80adb03b8a88ef

** Changed in: nova/liberty
   Status: Confirmed => Fix Released

** Changed in: nova/liberty
 Assignee: (unassigned) => Sean Dague (sdague)

** Changed in: glance/kilo
   Importance: Undecided => Critical

** Changed in: glance/liberty
   Importance: Undecided => Critical

** Tags removed: liberty-backport-potential
** Tags added: in-stable-kilo in-stable-liberty

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526675

Title:
  test_models_sync fails with 'Models and migration scripts aren't in
  sync'

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released
Status in Glance liberty series:
  Fix Committed
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Fix Released
Status in oslo.db:
  Fix Released

Bug description:
  2015-12-16 08:06:12.907 | 2015-12-16 08:06:12.889 | 
neutron.tests.functional.db.test_migrations.TestModelsMigrationsMysql.test_models_sync
  2015-12-16 08:06:12.907 | 2015-12-16 08:06:12.891 | 
--
  2015-12-16 08:06:12.908 | 2015-12-16 08:06:12.892 | 
  2015-12-16 08:06:12.908 | 2015-12-16 08:06:12.894 | Captured traceback:
  2015-12-16 08:06:12.908 | 2015-12-16 08:06:12.896 | ~~~
  2015-12-16 08:06:12.909 | 2015-12-16 08:06:12.897 | Traceback (most 
recent call last):
  2015-12-16 08:06:12.916 | 2015-12-16 08:06:12.899 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/test_migrations.py",
 line 603, in test_models_sync
  2015-12-16 08:06:12.916 | 2015-12-16 08:06:12.900 | "Models and 
migration scripts aren't in sync:\n%s" % msg)
  2015-12-16 08:06:12.916 | 2015-12-16 08:06:12.901 |   File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/unittest2/case.py",
 line 690, in fail
  2015-12-16 08:06:12.916 | 2015-12-16 08:06:12.903 | raise 
self.failureException(msg)
  2015-12-16 08:06:12.916 | 2015-12-16 08:06:12.904 | AssertionError: 
Models and migration scripts aren't in sync:
  2015-12-16 08:06:12.917 | 2015-12-16 08:06:12.905 | [ [ ( 'modify_type',
  2015-12-16 08:06:12.918 | 2015-12-16 08:06:12.907 |   None,
  2015-12-16 08:06:12.920 | 2015-12-16 08:06:12.908 |   'floatingips',
  2015-12-16 08:06:12.921 | 2015-12-16 08:06:12.909 |   
'standard_attr_id',
  2015-12-16 08:06:12.930 | 2015-12-16 08:06:12.911 |   { 
'existing_nullable': False,
  2015-12-16 08:06:12.931 | 2015-12-16 08:06:12.912 | 
'existing_server_default': False},
  2015-12-16 08:06:12.931 | 2015-12-16 08:06:12.913 |   
BIGINT(display_width=20),
  2015-12-16 08:06:12.931 | 2015-12-16 08:06:12.915 |   Variant())],
  2015-12-16 08:06:12.931 | 2015-12-16 08:06:12.916 |   [ ( 'modify_type',
  2015-12-16 08:06:12.931 | 2015-12-16 08:06:12.918 |   None,
  2015-12-16 08:06:12.932 | 2015-12-16 08:06:12.919 |   'networks',
  2015-12-16 08:06:12.933 | 2015-12-16 08:06:12.921 |   
'standard_attr_id',
  2015-12-16 08:06:12.934 | 2015-12-16 08:06:12.922 |   { 
'existing_nullable': False,
  2015-12-16 08:06:12.935 | 2015-12-16 08:06:12.923 | 
'existing_server_default': False},
  2015-12-16 08:06:12.937 | 2015-12-16 08:06:12.925 |   
BIGINT(display_width=20),
  2015-12-16 08:06:12.938 | 2015-12-16 08:06:12.926 |   Variant())],
  2015-12-16 08:06:12.940 | 2015-12-16 08:06:12.927 |   [ ( 'modify_type',
  2015-12-16 08:06:12.941 | 2015-12-16 08:06:12.929 |   None,
  2015-12-16 08:06:12.942 | 2015-12-16 08:06:12.930 |   'ports',
  2015-12-16 08:06:12.944 | 2015-12-16 08:06:12.932 |   
'standard_attr_id',
  2015-12-16 08:06:12.945 | 2015-12-16 08:06:12.933 |   { 
'existing_nullable': False,
  2015-12-16 08:06:12.946 | 2015-12-16 08:06:12.935 | 
'existing_server_default': False},
  2015-12-16 08:06:12.948 | 2015-12-16 08:06:12.936 |   
BIGINT(display_width=20),
  2015-12-16 08:06:12.949 | 2015-12-16 08:06:12.938 |   Variant())],
  2015-12-16 08:06:12.951 | 2015-12-16 08:06:12.939 |   [ ( 'modify_type',
  2015-12-16 08:06:12.952 | 2015-12-16 08:06:12.941 |   None,
  2015-12-16 08:06:12.954 | 2015-12-16 08:06:12.942 |   'routers',
  2015-12-16 08:06:12.955 | 2015-12-16 08:06:12.943 |   
'standard_attr_id',
  2015-12-16 08:06:12.957 | 2015-12-16 08:06:12.945 |   { 
'existing_nullable': False,
  2015-12-16 08:06:12.958 | 2015-12-16 08:06:12.946 | 
'existing_server_default': False},
  2015-12-16 08:06:12.959 | 2015-12-16
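
  The usual way to clear such a report is to make the model column and
  the migration declare exactly the same type, variants included. A
  sketch (table and column names taken from the diff above, the variant
  choice assumed):

      import sqlalchemy as sa

      # The model and the migration must resolve to the same column type:
      standard_attr_id = sa.Column(
          sa.BigInteger().with_variant(sa.Integer(), 'sqlite'),
          nullable=False)

      # ...and the matching alembic migration (illustrative only):
      # op.alter_column('ports', 'standard_attr_id',
      #                 type_=sa.BigInteger().with_variant(
      #                     sa.Integer(), 'sqlite'),
      #                 existing_nullable=False)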

[Yahoo-eng-team] [Bug 1482066] Re: cannot delete nova instance if volume is not active

2016-03-04 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1482066

Title:
  cannot delete nova instance if volume is not active

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress

Bug description:
  I booted from an encrypted volume, when try to delete it, it failed

  'Device ip-10.238.157.47:3260-iscsi-iqn.2010-10.org.openstack:volume-
  d167a0ac-fab5-484a-865d-667f1583c2ab-lun-1 is not active.\n'

  | fault | {"message": "Unexpected error while running command.
  |       | Command: sudo nova-rootwrap /etc/nova/rootwrap.conf cryptsetup luksClose ip-10.238.157.47:3260-iscsi-iqn.2010-10.org.openstack:volume-d167a0ac-fab5-484a-865d-667f1583c2ab-lun-1
  |       | Exit code: 4
  |       | Stdout: u''
  |       | Stderr: u'Dev", "code": 500, "details": "  File \"/opt/stack/nova/nova/compute/manager.py\", line 351, in decorated_function
  |       |     return function(self, context, *args, **kwargs)
  |       |   File \"/opt/stack/nova/nova/compute/manager.py\", line 2370, in terminate_instance
  |       |     do_terminate_instance(instance, bdms)
  |       |   File \"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py\", line 252, in inner
  |       |     return f(*args, **kwargs)
  |       |   File \"/opt/stack/nova/nova/compute/manager.py\", line 2368, in do_terminate_instance
  |       |     self._set_instance_error_state(context, instance)
  |       |   File \"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py\", line 119, in __exit__
  |       |     six.reraise(self.type_, self.value, self.tb)
  |       |   File \"/opt/stack/nova/nova/compute/manager.py\", line 2358, in do_terminate_instance
  |       |     self._delete_instance(context, instance, bdms, quotas)
  |       |   File \"/opt/stack/nova/nova/hooks.py\", line 149, in inner
  |       |     rv = f(*args, **kwargs)
  |       |   File \"/opt/stack/nova/nova/compute/manager.py\", line 2337, in _delete_instance
  |       |     quotas.rollback()
[Yahoo-eng-team] [Bug 1553216] [NEW] keystone-manage bootstrap does not work for non-SQL identity drivers

2016-03-04 Thread Matthew Edmonds
Public bug reported:

keystone-manage bootstrap attempts to create the specified user and then
handles a Conflict error as notice that the user already exists. This
works for the default SQL identity driver, but does not work for drivers
that do not support creating users. In order to work for all drivers,
which is necessary to support role assignment bootstrapping whenever the
driver configuration is changed, it should attempt to GET the user or
otherwise check in a way that will work for drivers that do not support
user creation.
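
A get-or-create check along the suggested lines would look something like
this (a sketch only; the manager and helper names are assumed, not the
eventual patch):

    from keystone import exception

    try:
        # A read works for read-only drivers such as LDAP.
        user = identity_api.get_user_by_name(username, domain_id)
    except exception.UserNotFound:
        # Only reached for drivers that support user creation.
        user = identity_api.create_user(user_ref)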

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1553216

Title:
  keystone-manage bootstrap does not work for non-SQL identity drivers

Status in OpenStack Identity (keystone):
  New

Bug description:
  keystone-manage bootstrap attempts to create the specified user and
  then handles a Conflict error as notice that the user already exists.
  This works for the default SQL identity driver, but does not work for
  drivers that do not support creating users. In order to work for all
  drivers, which is necessary to support role assignment bootstrapping
  whenever the driver configuration is changed, it should attempt to GET
  the user or otherwise check in a way that will work for drivers that
  do not support user creation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1553216/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523646] Re: Nova/Cinder Key Manager for Barbican Uses Stale Cache

2016-03-04 Thread Matt Riedemann
** Changed in: cinder/liberty
   Status: New => Fix Released

** Changed in: cinder
   Importance: Undecided => High

** Changed in: cinder/liberty
   Importance: Undecided => High

** Changed in: cinder/liberty
 Assignee: (unassigned) => Dave McCowan (dave-mccowan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523646

Title:
  Nova/Cinder Key Manager for Barbican Uses Stale Cache

Status in castellan:
  Fix Released
Status in Cinder:
  Fix Released
Status in Cinder liberty series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress
Status in OpenStack Security Notes:
  Confirmed

Bug description:
  The Key Manager for Barbican, implemented in Nova and Cinder, caches a value
of barbican_client to save extra
  calls to Keystone for authentication.  However, the cached value of 
barbican_client is only valid for the current
  context.  A check needs to be made to ensure the context has not changed 
before using the saved value.
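
  The fix pattern is to remember which context the cached client was
  built for and to rebuild it on a mismatch, roughly like this (a sketch
  with assumed attribute names, not the merged patch):

      def _get_barbican_client(self, ctxt):
          # Reuse the cached client only if it was built for the auth
          # token carried by the current request context.
          if (self._barbican_client is None
                  or self._cached_auth_token != ctxt.auth_token):
              self._barbican_client = self._create_barbican_client(ctxt)
              self._cached_auth_token = ctxt.auth_token
          return self._barbican_client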

  The symptoms for using a stale cache value include getting the following 
error message when creating
  an encrypted volume.

  From CLI:
  ---
  openstack volume create --size 1 --type LUKS encrypted_volume
  The server has either erred or is incapable of performing the requested 
operation. (HTTP 500) (Request-ID: req-aea6be92-020e-41ed-ba88-44a1f5235ab0)

  
  In cinder.log
  ---
  2015-12-03 09:09:03.648 TRACE cinder.volume.api Traceback (most recent call 
last):
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", 
line 82, in _exe
  cute_task
  2015-12-03 09:09:03.648 TRACE cinder.volume.api result = 
task.execute(**arguments)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/volume/flows/api/create_volume.py", line 409, in 
execute
  2015-12-03 09:09:03.648 TRACE cinder.volume.api source_volume)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/volume/flows/api/create_volume.py", line 338, in 
_get_encryption_key_
  id
  2015-12-03 09:09:03.648 TRACE cinder.volume.api encryption_key_id = 
key_manager.create_key(context)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/keymgr/barbican.py", line 147, in create_key
  2015-12-03 09:09:03.648 TRACE cinder.volume.api LOG.exception(_LE("Error 
creating key."))
  ….
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 502, in post
  2015-12-03 09:09:03.648 TRACE cinder.volume.api return self.request(url, 
'POST', **kwargs)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/utils.py", line 337, in inner
  2015-12-03 09:09:03.648 TRACE cinder.volume.api return func(*args, 
**kwargs)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 402, in 
request
  2015-12-03 09:09:03.648 TRACE cinder.volume.api raise 
exceptions.from_response(resp, method, url)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api Unauthorized: The request you 
have made requires authentication. (Disable debug mode to suppress these 
details.) (HTTP 401) (Request-ID: req-d2c52e0b-c16d-43ec-a7a0-763f1270)

To manage notifications about this bug go to:
https://bugs.launchpad.net/castellan/+bug/1523646/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523646] Re: Nova/Cinder Key Manager for Barbican Uses Stale Cache

2016-03-04 Thread Sean McGinnis
** Also affects: cinder/liberty
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523646

Title:
  Nova/Cinder Key Manager for Barbican Uses Stale Cache

Status in castellan:
  Fix Released
Status in Cinder:
  Fix Released
Status in Cinder liberty series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress
Status in OpenStack Security Notes:
  Confirmed

Bug description:
  The Key Manager for Barbican, implemented in Nova and Cinder, caches a value
of barbican_client to save extra
  calls to Keystone for authentication.  However, the cached value of 
barbican_client is only valid for the current
  context.  A check needs to be made to ensure the context has not changed 
before using the saved value.

  The symptoms for using a stale cache value include getting the following 
error message when creating
  an encrypted volume.

  From CLI:
  ---
  openstack volume create --size 1 --type LUKS encrypted_volume
  The server has either erred or is incapable of performing the requested 
operation. (HTTP 500) (Request-ID: req-aea6be92-020e-41ed-ba88-44a1f5235ab0)

  
  In cinder.log
  ---
  2015-12-03 09:09:03.648 TRACE cinder.volume.api Traceback (most recent call 
last):
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", 
line 82, in _exe
  cute_task
  2015-12-03 09:09:03.648 TRACE cinder.volume.api result = 
task.execute(**arguments)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/volume/flows/api/create_volume.py", line 409, in 
execute
  2015-12-03 09:09:03.648 TRACE cinder.volume.api source_volume)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/volume/flows/api/create_volume.py", line 338, in 
_get_encryption_key_
  id
  2015-12-03 09:09:03.648 TRACE cinder.volume.api encryption_key_id = 
key_manager.create_key(context)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/keymgr/barbican.py", line 147, in create_key
  2015-12-03 09:09:03.648 TRACE cinder.volume.api LOG.exception(_LE("Error 
creating key."))
  ….
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 502, in post
  2015-12-03 09:09:03.648 TRACE cinder.volume.api return self.request(url, 
'POST', **kwargs)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/utils.py", line 337, in inner
  2015-12-03 09:09:03.648 TRACE cinder.volume.api return func(*args, 
**kwargs)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 402, in 
request
  2015-12-03 09:09:03.648 TRACE cinder.volume.api raise 
exceptions.from_response(resp, method, url)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api Unauthorized: The request you 
have made requires authentication. (Disable debug mode to suppress these 
details.) (HTTP 401) (Request-ID: req-d2c52e0b-c16d-43ec-a7a0-763f1270)

To manage notifications about this bug go to:
https://bugs.launchpad.net/castellan/+bug/1523646/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523646] Re: Nova/Cinder Key Manager for Barbican Uses Stale Cache

2016-03-04 Thread Matt Riedemann
** Tags removed: liberty-backport-potential

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
   Status: New => In Progress

** Changed in: nova/liberty
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova/liberty
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1523646

Title:
  Nova/Cinder Key Manager for Barbican Uses Stale Cache

Status in castellan:
  Fix Released
Status in Cinder:
  Fix Released
Status in Cinder liberty series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress
Status in OpenStack Security Notes:
  Confirmed

Bug description:
  The Key Manager for Barbican, implemented in Nova and Cinder, caches a value
of barbican_client to save extra
  calls to Keystone for authentication.  However, the cached value of 
barbican_client is only valid for the current
  context.  A check needs to be made to ensure the context has not changed 
before using the saved value.

  The symptoms for using a stale cache value include getting the following 
error message when creating
  an encrypted volume.

  From CLI:
  ---
  openstack volume create --size 1 --type LUKS encrypted_volume
  The server has either erred or is incapable of performing the requested 
operation. (HTTP 500) (Request-ID: req-aea6be92-020e-41ed-ba88-44a1f5235ab0)

  
  In cinder.log
  ---
  2015-12-03 09:09:03.648 TRACE cinder.volume.api Traceback (most recent call 
last):
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", 
line 82, in _exe
  cute_task
  2015-12-03 09:09:03.648 TRACE cinder.volume.api result = 
task.execute(**arguments)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/volume/flows/api/create_volume.py", line 409, in 
execute
  2015-12-03 09:09:03.648 TRACE cinder.volume.api source_volume)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/volume/flows/api/create_volume.py", line 338, in 
_get_encryption_key_
  id
  2015-12-03 09:09:03.648 TRACE cinder.volume.api encryption_key_id = 
key_manager.create_key(context)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/opt/stack/cinder/cinder/keymgr/barbican.py", line 147, in create_key
  2015-12-03 09:09:03.648 TRACE cinder.volume.api LOG.exception(_LE("Error 
creating key."))
  ….
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 502, in post
  2015-12-03 09:09:03.648 TRACE cinder.volume.api return self.request(url, 
'POST', **kwargs)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/utils.py", line 337, in inner
  2015-12-03 09:09:03.648 TRACE cinder.volume.api return func(*args, 
**kwargs)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api   File 
"/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 402, in 
request
  2015-12-03 09:09:03.648 TRACE cinder.volume.api raise 
exceptions.from_response(resp, method, url)
  2015-12-03 09:09:03.648 TRACE cinder.volume.api Unauthorized: The request you 
have made requires authentication. (Disable debug mode to suppress these 
details.) (HTTP 401) (Request-ID: req-d2c52e0b-c16d-43ec-a7a0-763f1270)

To manage notifications about this bug go to:
https://bugs.launchpad.net/castellan/+bug/1523646/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553224] [NEW] keystone-manage bootstrap assumes user-project role assignment

2016-03-04 Thread Matthew Edmonds
Public bug reported:

keystone-manage bootstrap creates a role assignment for the specified
user on the specified project. That is one way someone might want to do
bootstrapping, but there are good reasons a user may need/prefer:

1) user-domain role assignment... e.g. Switching identity drivers for an
existing single-domain multi-project configuration. Bootstrapping is
needed to configure the initial role assignments for the new driver.
Since the "cloud admin" concept is not essential for single-domain
environments, it may very well not be configured, yet the initial role
assignment needs to grant someone the ability to create additional role
assignments for all projects in the domain. This would be a domain
admin.

2) group-project role assignment... e.g. Where the desired end result is
for a group-project role assignment on the cloud admin project, it makes
more sense to allow that to be created directly (which could be done
without even knowing the password of any user in that group) than to
require bootstrapping of a single user and then using their account to
create the group assignment and delete the bootstrapped assignment.

3) group-domain role assignment... e.g. combination of #1 and #2

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1553224

Title:
  keystone-manage bootstrap assumes user-project role assignment

Status in OpenStack Identity (keystone):
  New

Bug description:
  keystone-manage bootstrap creates a role assignment for the specified
  user on the specified project. That is one way someone might want to
  do bootstrapping, but there are good reasons a user may need/prefer:

  1) user-domain role assignment... e.g. Switching identity drivers for
  an existing single-domain multi-project configuration. Bootstrapping
  is needed to configure the initial role assignments for the new
  driver. Since the "cloud admin" concept is not essential for single-
  domain environments, it may very well not be configured, yet the
  initial role assignment needs to grant someone the ability to create
  additional role assignments for all projects in the domain. This would
  be a domain admin.

  2) group-project role assignment... e.g. Where the desired end result
  is for a group-project role assignment on the cloud admin project, it
  makes more sense to allow that to be created directly (which could be
  done without even knowing the password of any user in that group) than
  to require bootstrapping of a single user and then using their account
  to create the group assignment and delete the bootstrapped assignment.

  3) group-domain role assignment... e.g. combination of #1 and #2

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1553224/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553231] [NEW] neutron options in generated config have warnings

2016-03-04 Thread Pavlo Shchelokovskyy
Public bug reported:

This is related to bug 1486590

When I run

tox -egenconfig

on current master, I get the following warnings in the generated
nova.conf.sample (for all keystoneauth plugin-related options):

# Warning: Failed to format sample for auth_url
# isinstance() arg 2 must be a class, type, or tuple of classes and types

This is because keystoneauth returns its own option objects instead of
oslo.config ones. They must be properly converted to oslo.config objects
before the sample config is generated.

** Affects: nova
 Importance: Undecided
 Assignee: Pavlo Shchelokovskyy (pshchelo)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1553231

Title:
  neutron options in generated config have warnings

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  This is related to bug 1486590

  When I run

  tox -egenconfig

  on current master, I get the following warnings in the generated
  nova.conf.sample (for all keystoneauth plugin-related options):

  # Warning: Failed to format sample for auth_url
  # isinstance() arg 2 must be a class, type, or tuple of classes and types

  This is because keystoneauth returns its own option objects instead of
  oslo.config ones. They must be properly converted to oslo.config objects
  before the sample config is generated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1553231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506958] Re: TypeError: object.__new__(thread.lock) is not safe, use thread.lock.__new__()

2016-03-04 Thread Matt Riedemann
Removing kilo-backport-potential since I don't think we really supported
running nova api with wsgi in kilo.

** Changed in: nova
 Assignee: Jay Pipes (jaypipes) => Marian Horban (mhorban)

** Tags removed: kilo-backport-potential

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506958

Title:
  TypeError: object.__new__(thread.lock) is not safe, use
  thread.lock.__new__()

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress

Bug description:
  When using /usr/bin/nova-api, running "$ openstack availability zone
  list" works fine.

  If using the wsgi scripts and running nova-api via e.g. uwsgi, the
  same client command fails as follows:

  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions 
[req-184fd1f3-ae97-49d0-85dd-05ef08800238 0e56b818bc9c4eaea4b8d6a2f5da6227 
906359c0c71749ceb27e46612e0419ce - - -] Unexpected exception in API method
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/availability_zone.py",
 line 115, in detail
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions return 
self._describe_availability_zones_verbose(context)
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/availability_zone.py",
 line 61, in _describe_availability_zones_verbose
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions ctxt = 
context.elevated()
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/context.py", line 198, in elevated
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions context 
= copy.deepcopy(self)
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 190, in deepcopy
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions y = 
_reconstruct(x, rv, 1, memo)
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 334, in _reconstruct
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions state = 
deepcopy(state, memo)
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 163, in deepcopy
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions y = 
copier(x, memo)
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 257, in _deepcopy_dict
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions 
y[deepcopy(key, memo)] = deepcopy(value, memo)
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 190, in deepcopy
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions y = 
_reconstruct(x, rv, 1, memo)
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 334, in _reconstruct
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions state = 
deepcopy(state, memo)
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 163, in deepcopy
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions y = 
copier(x, memo)
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 257, in _deepcopy_dict
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions 
y[deepcopy(key, memo)] = deepcopy(value, memo)
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 190, in deepcopy
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions y = 
_reconstruct(x, rv, 1, memo)
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy.py", line 329, in _reconstruct
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions y = 
callable(*args)
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/copy_reg.py", line 93, in __newobj__
  2015-10-16 16:58:20.720 18938 ERROR nova.api.openstack.extensions
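
  The traceback bottoms out in plain Python behaviour: an object graph
  that contains a thread lock cannot be deep-copied. A minimal
  reproduction outside nova (CPython 2 wording matches the error in the
  title):

      import copy
      import threading

      class FakeContext(object):
          def __init__(self):
              # Stands in for whatever the wsgi entry point attaches to
              # the request context; not actual nova code.
              self.lock = threading.Lock()

      try:
          copy.deepcopy(FakeContext())
      except TypeError as exc:
          print(exc)  # object.__new__(thread.lock) is not safe ...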

[Yahoo-eng-team] [Bug 1288438] Re: Neutron server takes a long time to recover from VIP move

2016-03-04 Thread Louis Bouchard
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1288438

Title:
  Neutron server takes a long time to recover from VIP move

Status in Fuel for OpenStack:
  Fix Committed
Status in neutron:
  Fix Released
Status in oslo-incubator:
  Fix Released
Status in neutron package in Ubuntu:
  New
Status in neutron source package in Trusty:
  New

Bug description:
  Neutron waits sequentially for read_timeout seconds for each
  connection in its connection pool. The default pool_size is 10 so it
  takes 10 minutes for Neutron server to be available after the VIP is
  moved.
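
  Recovery time is roughly pool_size * read_timeout: ten connections at
  a 60-second read_timeout give the ten minutes above, and 7 * 30
  seconds the roughly three and a half minutes in the second log. The
  tuning used for that second run would look roughly like this in
  neutron.conf (a sketch; the option spelling and the read_timeout
  query parameter depend on the oslo.db and MySQLdb versions in use):

      [database]
      max_pool_size = 7
      connection = mysql://neutron:PASSWORD@192.168.0.2/neutron?read_timeout=30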

  This is log output from neutron-server after the VIP has been moved:
  2014-03-05 17:48:23.844 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:49:23.887 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:50:24.055 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:51:24.067 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:52:24.079 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:53:24.115 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:54:24.123 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:55:24.131 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:56:24.143 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:57:24.163 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')

  Here is the log output after the pool_size was changed to 7 and the 
read_timeout to 30.
  2014-03-05 18:50:25.300 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:50:55.331 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:51:25.351 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:51:55.387 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:52:25.415 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:52:55.427 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:53:25.439 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:53:25.549 15731 INFO urllib3.connectionpool [-] Starting new 
HTTP connection (1): 192.168.0.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1288438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1516578] Re: Add ppc architecture for NUMA

2016-03-04 Thread Matt Riedemann
** Changed in: nova
   Importance: Wishlist => Low

** Tags removed: liberty-backport-potential
** Tags added: libvirt numa ppc

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
 Assignee: (unassigned) => Sudipta Biswas (sbiswas7)

** Changed in: nova/liberty
   Importance: Undecided => Low

** Changed in: nova/liberty
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1516578

Title:
  Add ppc architecture for NUMA

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress

Bug description:
  Post this commit:
  https://review.openstack.org/#/c/170780/11/nova/virt/libvirt/driver.py,

  the NUMA topology reporting depends on the host architecture.

  The current check includes arch.I686, arch.X86_64 only.

  This bug is filed to get the ppc64/ppc64le architectures included in
  the list, since they use the same libvirt driver for KVM-based
  systems.
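
  The change amounts to widening the architecture guard, along these
  lines (a sketch; constants as in nova.compute.arch, the surrounding
  driver code elided):

      from nova.compute import arch

      SUPPORTED_NUMA_ARCHES = (arch.I686, arch.X86_64,
                               arch.PPC64, arch.PPC64LE)

      def _has_numa_support(host_arch):
          # ppc64/ppc64le share the libvirt/KVM code paths, so they can
          # report NUMA topology just like x86.
          return host_arch in SUPPORTED_NUMA_ARCHES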

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1516578/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288438] Re: Neutron server takes a long time to recover from VIP move

2016-03-04 Thread Jorge Niedbalski
** Changed in: neutron (Ubuntu Trusty)
 Assignee: (unassigned) => Mario Splivalo (mariosplivalo)

** Changed in: neutron (Ubuntu)
   Importance: Undecided => Medium

** Changed in: neutron (Ubuntu)
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Trusty)
   Status: New => In Progress

** Changed in: neutron (Ubuntu Trusty)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1288438

Title:
  Neutron server takes a long time to recover from VIP move

Status in Fuel for OpenStack:
  Fix Committed
Status in neutron:
  Fix Released
Status in oslo-incubator:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Trusty:
  In Progress

Bug description:
  Neutron waits sequentially for read_timeout seconds for each
  connection in its connection pool. The default pool_size is 10 so it
  takes 10 minutes for Neutron server to be available after the VIP is
  moved.

  This is log output from neutron-server after the VIP has been moved:
  2014-03-05 17:48:23.844 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:49:23.887 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:50:24.055 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:51:24.067 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:52:24.079 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:53:24.115 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:54:24.123 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:55:24.131 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:56:24.143 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:57:24.163 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')

  Here is the log output after the pool_size was changed to 7 and the 
read_timeout to 30.
  2014-03-05 18:50:25.300 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:50:55.331 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:51:25.351 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:51:55.387 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:52:25.415 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:52:55.427 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:53:25.439 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:53:25.549 15731 INFO urllib3.connectionpool [-] Starting new 
HTTP connection (1): 192.168.0.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1288438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1177570] Re: Hyper-V tests can be refactored to avoid multiple mox.VerifyAll() calls

2016-03-04 Thread Claudiu Belu
Tests have been refactored during Kilo and Liberty. Final patch that
merged in Liberty: https://review.openstack.org/#/c/139798/

No longer valid.

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1177570

Title:
  Hyper-V tests can be refactored to avoid multiple mox.VerifyAll()
  calls

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  The Hyper-V tests, specifically test_hypervapi.py, are currently using
  the mox framework for all the tests.

  As a result, it's possible to move the mox.VerifyAll() call from the
  individual tests to tearDown().

  The advantages are:

  1) Less code bloat due to duplicated VerifyAll() calls at
     the end of each individual test

  2) Ensure that VerifyAll() is called in cases in which the developer might
     forget about adding it at the end of the test
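
  The proposed refactoring is the standard mox pattern of verifying once
  in tearDown() (a sketch; the base class and test class naming are
  assumed):

      import mox

      from nova import test

      class HyperVAPITestCase(test.NoDBTestCase):
          def setUp(self):
              super(HyperVAPITestCase, self).setUp()
              self._mox = mox.Mox()

          def tearDown(self):
              self._mox.VerifyAll()   # runs for every test, cannot be forgotten
              self._mox.UnsetStubs()
              super(HyperVAPITestCase, self).tearDown()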

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1177570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553254] [NEW] neutron.tests.unit.objects.qos.test_policy.QosPolicyObjectTestCase fails if executed separately

2016-03-04 Thread Ihar Hrachyshka
Public bug reported:

The test will fail if you run just this one test class, as in:

$ tox -e py27
neutron.tests.unit.objects.qos.test_policy.QosPolicyObjectTestCase

...

Captured traceback:
~~~
Traceback (most recent call last):
  File 
"/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/mock/mock.py", 
line 1305, in patched
return func(*args, **keywargs)
  File "neutron/tests/unit/objects/test_base.py", line 238, in 
test_update_no_changes
obj.update()
  File "neutron/objects/rbac_db.py", line 280, in func
return new_method(self, orig_method)
  File "neutron/objects/rbac_db.py", line 212, in _update_hook
_update_post(self)
  File "neutron/objects/rbac_db.py", line 206, in _update_post
self.update_shared(self.shared, self.id)
  File "neutron/objects/rbac_db.py", line 189, in update_shared
action=models.ACCESS_SHARED)
  File "neutron/db/api.py", line 95, in get_object
.filter_by(**kwargs)
  File 
"/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2634, in first
ret = list(self[0:1])
  File 
"/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2457, in __getitem__
return list(res)
  File 
"/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2736, in __iter__
return self._execute_and_instances(context)
  File 
"/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2751, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
  File 
"/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 914, in execute
return meth(self, multiparams, params)
  File 
"/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
 line 323, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File 
"/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1010, in _execute_clauseelement
compiled_sql, distilled_params
  File 
"/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1146, in _execute_context
context)
  File 
"/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1337, in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
  File 
"/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/util/compat.py",
 line 200, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File 
"/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 1139, in _execute_context
context)
  File 
"/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/sqlalchemy/engine/default.py",
 line 450, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: 
qospolicyrbacs [SQL: u'SELECT qospolicyrbacs.tenant_id AS 
qospolicyrbacs_tenant_id, qospolicyrbacs.id AS qospolicyrbacs_id, 
qospolicyrbacs.target_tenant AS qospolicyrbacs_target_tenant, 
qospolicyrbacs.action AS qospolicyrbacs_action, qospolicyrbacs.object_id AS 
qospolicyrbacs_object_id \nFROM qospolicyrbacs \nWHERE qospolicyrbacs.object_id 
= ? AND qospolicyrbacs.action = ? AND qospolicyrbacs.target_tenant = ?\n LIMIT 
? OFFSET ?'] [parameters: ('', 'access_as_shared', '*', 1, 0)]

That's because the RBAC mixin now triggers additional database fetches
when updating a policy.
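
One way to make the test self-sufficient is to ensure the RBAC model is
registered with the declarative metadata before the in-memory schema is
created (a sketch of the idea, with the module path assumed; not the
actual fix):

    # Importing the module that defines the qospolicyrbacs model
    # registers it on BASEV2.metadata, so create_all() creates it too.
    from neutron.db import model_base
    from neutron.db.qos import models as qos_models  # noqa

    model_base.BASEV2.metadata.create_all(engine)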

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: New

** Changed in: neutron
Milestone: None => mitaka-rc1

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1553254

Title:
  neutron.tests.unit.objects.qos.test_policy.QosPolicyObjectTestCase
  fails if executed separately

Status in neutron:
  New

Bug description:
  The test will fail if you run just this one test class, as in:

  $ tox -e py27
  neutron.tests.unit.objects.qos.test_policy.QosPolicyObjectTestCase

  ...

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File 
"/home/vagrant/git/neutron/.tox/py27/lib/python2.7/site-packages/mock/mock.py", 
line 1305, in patched
  return func(*args, **keywargs)
File "neutron/tests/unit/objects/test_base.py", line 238, in 
test_update_no_changes
  obj.update()
File "neutron/objects/rbac_db.py", line 280, in func
  return new_method(self, orig_

[Yahoo-eng-team] [Bug 1553057] Re: Unable to launch an instance due to "ERROR nova.api.openstack.extensions

2016-03-04 Thread hgangwx
Thanks Andreas. Yes, it was a configuration issue. After correcting it
with double slashes, the VM launched.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1553057

Title:
  Unable to launch an instance due to "ERROR
  nova.api.openstack.extensions

Status in neutron:
  Invalid

Bug description:
  Creation of Instance fails

  ~# nova boot --flavor 1 --image cirros --nic 
net-id=8856e7be-bce7-42e4-84ff-58edd7b26b41 TestVM
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-3d3303f9-b64a-4021-a674-4ba7f55fa030)

  Here are the nova.api logs from controller node

  2016-03-04 00:49:52.212 2848 INFO nova.osapi_compute.wsgi.server 
[req-1b295f62-f15b-45e1-99fe-a9e9af22795c 96376c21180244ca8266841b90b74275 
e195f8abb6bb4e778f6afb73aeb8bb74 - - -] 10.140.33.254 "GET 
/v2/e195f8abb6bb4e778f6afb73aeb8bb74/flavors/1 HTTP/1.1" status: 200 len: 615 
time: 0.2387350
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions 
[req-7763e88d-a861-4890-bd7e-7feccce4394e 96376c21180244ca8266841b90b74275 
e195f8abb6bb4e778f6afb73aeb8bb74 - - -] Unexpected exception in API method
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/servers.py", line 
611, in create
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions 
**create_kwargs)
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/hooks.py", line 149, in inner
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1581, in create
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 1181, in 
_create_instance
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions 
auto_disk_config, reservation_id, max_count)
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 923, in 
_validate_and_build_base_options
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions 
requested_networks, max_count)
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/compute/api.py", line 533, in 
_check_requested_networks
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions 
max_count)
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 1171, in 
validate_networks
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions context, 
neutron, requested_networks)
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 1143, in 
_ports_needed_per_instance
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions 
neutron=neutron)
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 282, in 
_get_available_networks
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions nets = 
neutron.list_networks(**search_opts).get('networks', [])
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 102, in 
with_params
  2016-03-04 00:49:53.075 2848 ERROR nova.api.openstack.extensions ret = 
self.function(instanc

[Yahoo-eng-team] [Bug 1124540] Re: swiftclient put_object needs a content_length

2016-03-04 Thread Timur Sufiev
*** This bug is a duplicate of bug 1352256 ***
https://bugs.launchpad.net/bugs/1352256

** This bug is no longer a duplicate of bug 1200534
   swiftclient put_object needs Content-Length header
** This bug has been marked a duplicate of bug 1352256
   Uploading a new object fails with Ceph as object storage backend using 
RadosGW

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1124540

Title:
  swiftclient put_object needs a content_length

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  I was unable to upload a file with the web interface and I realised that
  without a content length, web servers or proxies like Apache or nginx will
  return a 411 Length Required error even if the headers contain
  "Transfer-Encoding: chunked", because it's a PUT request.

  I've attached a naive patch of what could solve the problem.
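
  The gist of such a patch on the horizon side (a sketch; variable names
  are assumed, but swiftclient's put_object does accept a content_length
  keyword):

      from swiftclient import client as swift_client

      swift_client.put_object(storage_url, auth_token, container_name,
                              object_name,
                              contents=uploaded_file,
                              content_length=uploaded_file.size)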

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1124540/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1200534] Re: swiftclient put_object needs Content-Length header

2016-03-04 Thread Timur Sufiev
*** This bug is a duplicate of bug 1352256 ***
https://bugs.launchpad.net/bugs/1352256

This issue seems to be fixed in
https://bugs.launchpad.net/horizon/+bug/1352256

Setting this one as the duplicate, it may be older, but the fix isn't
here.

** This bug has been marked a duplicate of bug 1352256
   Uploading a new object fails with Ceph as object storage backend using 
RadosGW

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1200534

Title:
  swiftclient put_object needs Content-Length header

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  We use radosgw for our object-store. The frontend Apache Webserver
  simply needs the size of the Upload. I'll provide a patch for this.

  This may break legacy swift access and needs testing.

  Kind Regards
  Oliver

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1200534/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240373] Re: VMware: Sparse glance vmdk's size property is mistaken for capacity

2016-03-04 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
   Status: New => In Progress

** Changed in: nova/liberty
   Importance: Undecided => Medium

** Changed in: nova/liberty
 Assignee: (unassigned) => Sven Anderson (ansiwen)

** Tags removed: havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240373

Title:
  VMware: Sparse glance vmdk's size property is mistaken for capacity

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  In Progress
Status in VMwareAPI-Team:
  Confirmed

Bug description:
  Scenario:

  a sparse vmdk whose file size is 800MB and whose capacity is 4GB is uploaded 
to glance without specifying the size property.
  (glance uses the file's size for the size property in this case)

  nova boot said image with flavor tiny (root disk size of 1GB).

  Result:
  The vmwareapi driver fails to spawn the VM because the ESX server throws a
  fault when asked to 'grow' the disk from 4GB down to 1GB (the driver thinks
  it is an attempt to grow from 800MB to 1GB).

  Relevant hostd.log on ESX host:
  2013-10-15T17:02:24.509Z [35BDDB90 verbose 'Default'
  opID=HB-host-22@3170-d82e35d0-80] ha-license-manager:Validate -> Valid
  evaluation detected for "VMware ESX Server 5.0" (lastError=2,
  desc.IsValid:Yes)
  2013-10-15T17:02:25.129Z [FFBE3D20 info 'Vimsvc.TaskManager'
  opID=a3057d82-8e] Task Created :
  haTask--vim.VirtualDiskManager.extendVirtualDisk-526626761


  2013-10-15T17:02:25.158Z [35740B90 warning 'Vdisksvc' opID=a3057d82-8e]
  New capacity (2097152) is not greater than original capacity (8388608).

  I am still not exactly sure if this is considered user error on glance
  import, a glance shortcoming of not introspecting the vmdk, or a bug
  in the compute driver. Regardless, this bug is to track any potential
  defensive code we can add to the driver to better handle this
  scenario.
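
  A minimal sketch of the kind of defensive check meant here (hypothetical
  helper names; the real driver code differs):

      def maybe_extend_disk(extend_fn, current_capacity_bytes, requested_bytes):
          # A sparse vmdk's file size can be far smaller than its capacity,
          # so the image 'size' property is not a reliable stand-in for
          # capacity. Only ask the hypervisor to extend when the request
          # actually grows the disk.
          if requested_bytes <= current_capacity_bytes:
              return
          extend_fn(requested_bytes)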

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240373/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550434] Re: vpnaas alembic migration fails for upgrade liberty

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/288253
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=1c67cda0f8ce5b6ff028d53cedbb792931fd96f6
Submitter: Jenkins
Branch:master

commit 1c67cda0f8ce5b6ff028d53cedbb792931fd96f6
Author: Henry Gessau 
Date:   Fri Mar 4 00:39:53 2016 -0500

Fix branch order when upgrading to alembic milestone

When using neutron-db-manage to upgrade to a milestone tag,
the script was not ensuring that the expand branch was
upgraded before the contract branch. This broke projects
where contract migrations depend on expand migrations.

Fixes-Bug: #1550434

Change-Id: I0e6fc31dfa062c689936b2fe982147335ad9dce3
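
A minimal sketch of the ordering the fix enforces (hypothetical helper
names; neutron-db-manage's real code differs):

    def upgrade_to_milestone(upgrade_branch, milestone):
        # Expand (additive) migrations must run before contract
        # (destructive) ones, since contract revisions may depend on
        # expand revisions being in place.
        for branch in ('expand', 'contract'):
            upgrade_branch(milestone, branch)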


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1550434

Title:
  vpnaas alembic migration fails for upgrade liberty

Status in neutron:
  Fix Released

Bug description:
  With the mitaka version of neutron and vpnaas installed:

  $ NDBM --subproject neutron-vpnaas upgrade liberty
  No handlers could be found for logger "oslo_config.cfg"
  INFO  [alembic.runtime.migration] Context impl MySQLImpl.
  INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Running upgrade (contract) for neutron-vpnaas ...
  INFO  [alembic.runtime.migration] Context impl MySQLImpl.
  INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  INFO  [alembic.runtime.migration] Running upgrade  -> start_neutron_vpnaas, 
start neutron-vpnaas chain
  INFO  [alembic.runtime.migration] Running upgrade start_neutron_vpnaas -> 
3ea02b2a773e, add_index_tenant_id
  INFO  [alembic.runtime.migration] Running upgrade 3ea02b2a773e -> kilo, kilo
  INFO  [alembic.runtime.migration] Running upgrade kilo -> 30018084ed99, 
Initial no-op Liberty expand rule.
  INFO  [alembic.runtime.migration] Running upgrade 30018084ed99 -> 
24f28869838b, Add fields to VPN service table
  INFO  [alembic.runtime.migration] Running upgrade kilo -> 5689aa52, fix 
identifier map fk
  INFO  [alembic.runtime.migration] Running upgrade 5689aa52, 24f28869838b 
-> 333dfd6afaa2, Populate VPN service table fields
  INFO  [alembic.runtime.migration] Running upgrade 333dfd6afaa2 -> 
2c82e782d734, drop_tenant_id_in_cisco_csr_identifier_map
OK
  INFO  [alembic.runtime.migration] Context impl MySQLImpl.
  INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Traceback (most recent call last):
File "/home/henry/Dev/neutron-vpnaas/.tox/pep8/bin/neutron-db-manage", line 
10, in 
  sys.exit(main())
File 
"/home/henry/Dev/neutron-vpnaas/.tox/pep8/src/neutron/neutron/db/migration/cli.py",
 line 744, in main
  return_val |= bool(CONF.command.func(config, CONF.command.name))
File 
"/home/henry/Dev/neutron-vpnaas/.tox/pep8/src/neutron/neutron/db/migration/cli.py",
 line 218, in do_upgrade
  run_sanity_checks(config, revision)
File 
"/home/henry/Dev/neutron-vpnaas/.tox/pep8/src/neutron/neutron/db/migration/cli.py",
 line 726, in run_sanity_checks
  script_dir.run_env()
File 
"/home/henry/Dev/neutron-vpnaas/.tox/pep8/local/lib/python2.7/site-packages/alembic/script/base.py",
 line 397, in run_env
  util.load_python_file(self.dir, 'env.py')
File 
"/home/henry/Dev/neutron-vpnaas/.tox/pep8/local/lib/python2.7/site-packages/alembic/util/pyfiles.py",
 line 81, in load_python_file
  module = load_module_py(module_id, path)
File 
"/home/henry/Dev/neutron-vpnaas/.tox/pep8/local/lib/python2.7/site-packages/alembic/util/compat.py",
 line 79, in load_module_py
  mod = imp.load_source(module_id, path, fp)
File 
"/home/henry/Dev/neutron-vpnaas/neutron_vpnaas/db/migration/alembic_migrations/env.py",
 line 87, in 
  run_migrations_online()
File 
"/home/henry/Dev/neutron-vpnaas/neutron_vpnaas/db/migration/alembic_migrations/env.py",
 line 78, in run_migrations_online
  context.run_migrations()
File "", line 8, in run_migrations
File 
"/home/henry/Dev/neutron-vpnaas/.tox/pep8/local/lib/python2.7/site-packages/alembic/runtime/environment.py",
 line 797, in run_migrations
  self.get_context().run_migrations(**kw)
File 
"/home/henry/Dev/neutron-vpnaas/.tox/pep8/local/lib/python2.7/site-packages/alembic/runtime/migration.py",
 line 303, in run_migrations
  for step in self._migrations_fn(heads, self):
File 
"/home/henry/Dev/neutron-vpnaas/.tox/pep8/src/neutron/neutron/db/migration/cli.py",
 line 717, in check_sanity
  revision, rev, implicit_base=True):
File 
"/home/henry/Dev/neutron-vpnaas/.tox/pep8/local/lib/python2.7/site-packages/alembic/script/revision.py",
 line 664, in _iterate_revisions
  raise RangeNotAncestorError(lower, upper)
  alembic.script.revision.RangeNotAncestorError: Revision (u'2c82e782d734',) is 
not an ancestor of revision 24f28869838b

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1550434/+subscriptions

[Yahoo-eng-team] [Bug 1553319] [NEW] When CPU metric collection fails, stack trace not in nova logs

2016-03-04 Thread Joe Cropper
Public bug reported:

When the resource tracker tries to collect metric data, if something
goes wrong, the stack trace isn't shown, which masks the underlying
problem and makes debugging difficult.

For example, here's the message you see...

2016-03-04 13:45:02.582 31225 WARNING nova.compute.resource_tracker
[req-141a8c26-fa98-470d-accb-97b15bf98d70 - - - - -] Cannot get the
metrics from...

...and no stack trace.

** Affects: nova
 Importance: Undecided
 Assignee: Joe Cropper (jwcroppe)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Joe Cropper (jwcroppe)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1553319

Title:
  When CPU metric collection fails, stack trace not in nova logs

Status in OpenStack Compute (nova):
  New

Bug description:
  When the resource tracker tries to collect metric data, if something
  goes wrong, the stack trace isn't shown, which masks the underlying
  problem and makes debugging difficult.

  For example, here's the message you see...

  2016-03-04 13:45:02.582 31225 WARNING nova.compute.resource_tracker
  [req-141a8c26-fa98-470d-accb-97b15bf98d70 - - - - -] Cannot get the
  metrics from...

  ...and no stack trace.
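
  A minimal sketch of the likely remedy, assuming oslo.log's standard
  LOG.exception helper (which records the active traceback):

      try:
          metrics = monitor.get_metrics()  # hypothetical call
      except Exception:
          # LOG.warning drops the traceback; LOG.exception keeps it,
          # which is what makes failures like this debuggable.
          LOG.exception("Cannot get the metrics from %s", monitor)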

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1553319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553314] [NEW] Attempting to create a volume after deleting one fails

2016-03-04 Thread Rob Cresswell
Public bug reported:

This is a fun bug.

1) Create Volume, follow process until volume is created
2) Delete volume from step 1.
3) Click "Create Volume" again. The modal spinner appears briefly, then 
disappears and the console has the following error:

jquery.js:8706 GET 
http://localhost:8000/project/volumes/?action=row_update&table=volumes&obj_id=9f1c9086-6086-491b-8826-502e34abcf00
 404 (NOT FOUND)
send @ jquery.js:8706
jQuery.extend.ajax @ jquery.js:8136
request @ horizon.communication.js:32
horizon.ajax.next @ horizon.communication.js:48
horizon.ajax.queue @ horizon.communication.js:39
(anonymous function) @ horizon.tables.js:28
jQuery.extend.each @ jquery.js:657
jQuery.fn.jQuery.each @ jquery.js:266
horizon.datatables.update @ horizon.tables.js:25
horizon.addInitFunction.horizon.datatables.init @ horizon.tables.js:643
horizon.init @ horizon.js:24
fire @ jquery.js:3048
self.fireWith @ jquery.js:3160
jQuery.extend.ready @ jquery.js:433
completed @ jquery.js:104

** Affects: horizon
 Importance: High
 Status: New

** Changed in: horizon
Milestone: None => mitaka-rc1

** Changed in: horizon
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1553314

Title:
  Attempting to create a volume after deleting one fails

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  This is a fun bug.

  1) Create Volume, follow process until volume is created
  2) Delete volume from step 1.
  3) Click "Create Volume" again. The modal spinner appears briefly, then 
disappears and the console has the following error:

  jquery.js:8706 GET 
http://localhost:8000/project/volumes/?action=row_update&table=volumes&obj_id=9f1c9086-6086-491b-8826-502e34abcf00
 404 (NOT FOUND)
  send @ jquery.js:8706
  jQuery.extend.ajax @ jquery.js:8136
  request @ horizon.communication.js:32
  horizon.ajax.next @ horizon.communication.js:48
  horizon.ajax.queue @ horizon.communication.js:39
  (anonymous function) @ horizon.tables.js:28
  jQuery.extend.each @ jquery.js:657
  jQuery.fn.jQuery.each @ jquery.js:266
  horizon.datatables.update @ horizon.tables.js:25
  horizon.addInitFunction.horizon.datatables.init @ horizon.tables.js:643
  horizon.init @ horizon.js:24
  fire @ jquery.js:3048
  self.fireWith @ jquery.js:3160
  jQuery.extend.ready @ jquery.js:433
  completed @ jquery.js:104

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1553314/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551836] Re: CORS middleware's latent configuration options need to change

2016-03-04 Thread Junyuan Leng
** Changed in: ironic
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1551836

Title:
  CORS middleware's latent configuration options need to change

Status in Aodh:
  In Progress
Status in Barbican:
  In Progress
Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in cloudkitty:
  In Progress
Status in congress:
  In Progress
Status in Cue:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in heat:
  In Progress
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  In Progress
Status in Manila:
  In Progress
Status in Mistral:
  In Progress
Status in Murano:
  In Progress
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.config:
  Fix Released
Status in Sahara:
  In Progress
Status in OpenStack Search (Searchlight):
  In Progress
Status in Solum:
  In Progress
Status in Trove:
  In Progress

Bug description:
  It was pointed out in http://lists.openstack.org/pipermail/openstack-
  dev/2016-February/086746.html that configuration options included in
  paste.ini are less than optimal, because they impose an upgrade burden
  on both operators and engineers. The following discussion expanded to
  all projects (not just those using paste), and the following
  conclusion was reached:

  A) All generated configuration files should contain any headers which the API 
needs to operate. This is currently supported in oslo.config's generate-config 
script, as of 3.7.0
  B) These same configuration headers should be set as defaults for the given 
API, using cfg.set_defaults. This permits an operator to simply activate a 
domain, and not have to worry about tweaking additional settings.
  C) All hardcoded headers should be detached from the CORS middleware.
  D) Configuration and activation of CORS should be consistent across all 
projects.

  It was also agreed that this is a blocking bug for mitaka. A reference
  patch has already been approved for keystone, available here:
  https://review.openstack.org/#/c/285308/
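
  A minimal sketch of points (A)-(C), assuming oslo_middleware.cors exposes
  a set_defaults hook as used in the keystone patch referenced above:

      from oslo_middleware import cors

      def set_middleware_defaults():
          # Defaults live in code rather than paste.ini, so an operator
          # only has to set allowed_origin to enable CORS.
          cors.set_defaults(
              allow_headers=['X-Auth-Token', 'X-OpenStack-Request-ID'],
              allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'])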

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1551836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553330] [NEW] Example configs need to be synced for Mitaka

2016-03-04 Thread Erno Kuvaja
Public bug reported:

Example configs need to be synced from the config generator for the Mitaka
release.

** Affects: glance
 Importance: Medium
 Assignee: Niall Bunting (niall-bunting)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1553330

Title:
  Example configs need to be synced for Mitaka

Status in Glance:
  In Progress

Bug description:
  Example configs need to be synced from the config generator for the Mitaka
  release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1553330/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479214] Re: nova can't attach volume to specific device name

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/280391
Committed: 
https://git.openstack.org/cgit/openstack/api-site/commit/?id=bc38fb73477a4ac4d031dc3228c4956a6c083f5d
Submitter: Jenkins
Branch:master

commit bc38fb73477a4ac4d031dc3228c4956a6c083f5d
Author: Matt Riedemann 
Date:   Mon Feb 15 12:42:41 2016 -0800

Note that nova libvirt driver no longer honors device name on volume attach

Commit 0283234e837d9faf807e6e8da6ec6321ee56b31a in Liberty changed the
nova libvirt driver to no longer honor user-supplied device names on the
volume attachment request.

This change updates the API docs to add a note about so users are aware
that if they know they are hitting a libvirt-managed compute, device names
for volume attachment are auto-generated.

Change-Id: I42fbc5645414af99fedc49d5299c1e15d619d5bd
Closes-Bug: #1479214


** Changed in: openstack-api-site
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1479214

Title:
  nova can't attach volume to specific device name

Status in OpenStack Compute (nova):
  Won't Fix
Status in openstack-api-site:
  Fix Released

Bug description:
  The nova attach-volume CLI supports an option named device, which lets you
  specify where this volume should be mounted.
  But it doesn't work. The volume will be attached to a device determined by
  nova-compute.

  Maybe this bug is caused by the following code:
  
https://github.com/openstack/nova/blob/c5db407bb22e453a4bca22de1860bb6ce6090782/nova/virt/libvirt/driver.py#L6823
  It ignores the device name the user assigns, then auto-selects the disk
  from blockinfo.

  My nova git environment is 
  nova: 14d00296b179fcf115cf13d37b2f0b5b734d298d
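
  For reference, a hedged sketch of the python-novaclient call whose device
  argument ends up being ignored by the libvirt driver:

      # The API accepts 'device', but with a libvirt-managed compute the
      # guest-visible name is auto-generated; '/dev/vdz' is effectively a
      # suggestion, not a guarantee.
      nova.volumes.create_server_volume(server.id, volume.id,
                                        device='/dev/vdz')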

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1479214/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1549032] Re: max_net_count doesn't interact properly with min_count when booting multiple instances

2016-03-04 Thread Matt Riedemann
** Tags added: api

** Changed in: nova
   Importance: Undecided => Medium

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Tags added: liberty-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1549032

Title:
  max_net_count doesn't interact properly with min_count when booting
  multiple instances

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) liberty series:
  New

Bug description:
  In compute.api.API._create_instance() we have a min_count that is
  optionally passed in by the end user as part of the boot request.

  We calculate max_net_count based on networking constraints.

  Currently we error out if max_net_count is zero, but we don't check it
  against min_count.  If the end user specifies a min_count that is
  greater than the calculated  max_net_count the resulting error isn't
  very useful.

  We know that min_count is guaranteed to be at least 1, so we can
  replace the existing test against zero with one against min_count.
  Doing this gives a much more reasonable error message:

  controller-0:~$ nova boot --image myimage --flavor simple --min-count 2 
--max-count 3 test
  ERROR (Forbidden): Maximum number of ports exceeded (HTTP 403) (Request-ID: 
req-f7ff28bf-5708-4cbf-a634-2e9686afd970)
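
  A minimal sketch of the proposed check, assuming nova's
  PortLimitExceeded exception (which matches the message shown):

      if max_net_count < min_count:
          # min_count is guaranteed to be at least 1, so this also
          # covers the old max_net_count == 0 case.
          raise exception.PortLimitExceeded()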

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1549032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489059] Re: "db type could not be determined" running py34

2016-03-04 Thread Tom Barron
** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Tom Barron (tpb)

** Changed in: cinder
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1489059

Title:
  "db type could not be determined" running py34

Status in Aodh:
  Fix Released
Status in Barbican:
  Fix Released
Status in Bareon:
  Fix Released
Status in Cinder:
  In Progress
Status in cloudkitty:
  Fix Committed
Status in Fuel for OpenStack:
  In Progress
Status in Glance:
  Fix Committed
Status in glance_store:
  Fix Committed
Status in hacking:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-lib:
  Fix Committed
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in kolla:
  Fix Released
Status in Manila:
  Fix Released
Status in Murano:
  Fix Committed
Status in networking-midonet:
  Fix Released
Status in networking-ofagent:
  Fix Released
Status in neutron:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in python-muranoclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in python-swiftclient:
  In Progress
Status in Rally:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released
Status in senlin:
  Fix Released
Status in tap-as-a-service:
  Fix Released
Status in tempest:
  Fix Released
Status in zaqar:
  Fix Released
Status in python-ironicclient package in Ubuntu:
  Fix Committed

Bug description:
  When running tox for the first time, the py34 execution fails with an
  error saying "db type could not be determined".

  This issue is known to be caused when the run of py27 precedes py34 and
  can be solved by erasing the .testrepository directory and running
  "tox -e py34" first of all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1489059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397903] Re: Hardcoded initial database version

2016-03-04 Thread OpenStack Infra
** Changed in: keystone
   Status: Opinion => In Progress

** Changed in: keystone
 Assignee: (unassigned) => Sean Perry (sean-perry-a)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1397903

Title:
  Hardcoded initial database version

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Migration repositories provide a hardcoded initial version value or are
  even missing it entirely. We need a single automated tool to get the real
  initial version from any migration repo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1397903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553345] [NEW] Chef gem installer fails on ubuntu 14.04

2016-03-04 Thread ryan vanniekerk
Public bug reported:

Running the "gem" version of the chef install fails on the latest Ubuntu
server 14.04 LTS AMI (ami-fce3c696).

Here is part of my user-data for cloud init:

bootcmd:
  - apt-get update && apt-get upgrade cloud-init
  - apt-get install build-essential
  - apt-get install -q -y <%= find_in_map("SoftwarePropertiesPackage", 
ref("LCOSVersion"), "PackageName") %>
  - apt-add-repository -y ppa:brightbox/ruby-ng
  - echo "Updating package lists..."
  - apt-get update -qq
  - echo "Installing ruby..."
  - apt-get install -q -y ruby<%= find_in_map("RubyVersionToPackageInfo", 
ref("AppRubyVersion"), "Version") %>
  - apt-get install -q -y ruby<%= find_in_map("RubyVersionToPackageInfo", 
ref("AppRubyVersion"), "Version") %>-dev
  - update-alternatives --set ruby /usr/bin/ruby<%= 
find_in_map("RubyVersionToPackageInfo", ref("AppRubyVersion"), "Version") %>
  - update-alternatives --set gem /usr/bin/gem<%= 
find_in_map("RubyVersionToPackageInfo", ref("AppRubyVersion"), "Version") %>
  - echo "Updating rubygems to latest version..."
  - gem update --system --no-rdoc --no-ri
chef:
  install_type: gems
  version: <%= ref("ChefVersion") %>
...

Here is the output from cloud-init

Mar  4 18:00:04 ip-xxx[CLOUDINIT] util.py[DEBUG]: Running chef
() failed#012Traceback (most
recent call last):#012  File "/usr/lib/python2.7/dist-
packages/cloudinit/stages.py", line 658, in _run_modules#012
cc.run(run_name, mod.handle, func_args, freq=freq)#012  File
"/usr/lib/python2.7/dist-packages/cloudinit/cloud.py", line 63, in
run#012return self._runners.run(name, functor, args, freq,
clear_on_fail)#012  File "/usr/lib/python2.7/dist-
packages/cloudinit/helpers.py", line 197, in run#012results =
functor(*args)#012  File "/usr/lib/python2.7/dist-
packages/cloudinit/config/cc_chef.py", line 99, in handle#012
install_chef_from_gems(cloud.distro, ruby_version, chef_version)#012
File "/usr/lib/python2.7/dist-packages/cloudinit/config/cc_chef.py",
line 128, in install_chef_from_gems#012
distro.install_packages(get_ruby_packages(ruby_version))#012AttributeError:
'str' object has no attribute 'install_packages'

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: chef

** Tags added: chef

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1553345

Title:
  Chef gem installer fails on ubuntu 14.04

Status in cloud-init:
  New

Bug description:
  Running the "gem" version of the chef install fails on the latest
  Ubuntu server 14.04 LTS AMI (ami-fce3c696).

  Here is part of my user-data for cloud init:

  bootcmd:
- apt-get update && apt-get upgrade cloud-init
- apt-get install build-essential
- apt-get install -q -y <%= find_in_map("SoftwarePropertiesPackage", 
ref("LCOSVersion"), "PackageName") %>
- apt-add-repository -y ppa:brightbox/ruby-ng
- echo "Updating package lists..."
- apt-get update -qq
- echo "Installing ruby..."
- apt-get install -q -y ruby<%= find_in_map("RubyVersionToPackageInfo", 
ref("AppRubyVersion"), "Version") %>
- apt-get install -q -y ruby<%= find_in_map("RubyVersionToPackageInfo", 
ref("AppRubyVersion"), "Version") %>-dev
- update-alternatives --set ruby /usr/bin/ruby<%= 
find_in_map("RubyVersionToPackageInfo", ref("AppRubyVersion"), "Version") %>
- update-alternatives --set gem /usr/bin/gem<%= 
find_in_map("RubyVersionToPackageInfo", ref("AppRubyVersion"), "Version") %>
- echo "Updating rubygems to latest version..."
- gem update --system --no-rdoc --no-ri
  chef:
install_type: gems
version: <%= ref("ChefVersion") %>
  ...

  Here is the output from cloud-init

  Mar  4 18:00:04 ip-xxx[CLOUDINIT] util.py[DEBUG]: Running chef
  () failed#012Traceback (most
  recent call last):#012  File "/usr/lib/python2.7/dist-
  packages/cloudinit/stages.py", line 658, in _run_modules#012
  cc.run(run_name, mod.handle, func_args, freq=freq)#012  File
  "/usr/lib/python2.7/dist-packages/cloudinit/cloud.py", line 63, in
  run#012return self._runners.run(name, functor, args, freq,
  clear_on_fail)#012  File "/usr/lib/python2.7/dist-
  packages/cloudinit/helpers.py", line 197, in run#012results =
  functor(*args)#012  File "/usr/lib/python2.7/dist-
  packages/cloudinit/config/cc_chef.py", line 99, in handle#012
  install_chef_from_gems(cloud.distro, ruby_version, chef_version)#012
  File "/usr/lib/python2.7/dist-packages/cloudinit/config/cc_chef.py",
  line 128, in install_chef_from_gems#012
  distro.install_packages(get_ruby_packages(ruby_version))#012AttributeError:
  'str' object has no attribute 'install_packages'
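
  The traceback points at an argument-order mismatch: cc_chef.py passes
  cloud.distro first, so the distro parameter inside the function ends up
  bound to a plain string. A hedged sketch of the apparent fix, assuming
  the signature install_chef_from_gems(ruby_version, chef_version, distro):

      # broken call: cloud.distro lands in the ruby_version slot and a
      # str eventually reaches distro.install_packages(...)
      # install_chef_from_gems(cloud.distro, ruby_version, chef_version)

      # call matching the assumed signature instead:
      install_chef_from_gems(ruby_version, chef_version, cloud.distro)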

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1553345/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1553374] [NEW] Intermittent failure in BGP API tests

2016-03-04 Thread Ryan Tidwell
Public bug reported:

Seeing the following failure intermittently in API test jobs:

http://paste.openstack.org/show/489400/

In the 2 failed jobs [1][2] I've analyzed,
test_get_advertised_routes_floating_ips() runs before this failed test.
test_get_advertised_routes_null_address_scope() assumes no floating IP
associations and no address scopes so that it can assert that no routes
are being announced by a bgp_speaker.  My theory is that the floating IP
created in test_get_advertised_routes_floating_ips() doesn't get cleaned
up fast enough, and test_get_advertised_routes_null_address_scope() runs
before the floating IP from the previous test has actually been cleaned
up.

[1] 
http://logs.openstack.org/85/267985/9/check/gate-neutron-dsvm-api/b09db1c/console.html
[2] 
http://logs.openstack.org/76/282876/17/check/gate-neutron-dsvm-api/b49cae9/console.html

** Affects: neutron
 Importance: High
 Assignee: Ryan Tidwell (ryan-tidwell)
 Status: Confirmed


** Tags: gate-failure

** Changed in: neutron
 Assignee: (unassigned) => Ryan Tidwell (ryan-tidwell)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1553374

Title:
  Intermittent failure in BGP API tests

Status in neutron:
  Confirmed

Bug description:
  Seeing the following failure intermittently in API test jobs:

  http://paste.openstack.org/show/489400/

  In the 2 failed jobs [1][2] I've analyzed,
  test_get_advertised_routes_floating_ips() runs before this failed
  test.  test_get_advertised_routes_null_address_scope() assumes no
  floating IP associations and no address scopes so that it can assert
  that no routes are being announced by a bgp_speaker.  My theory is
  that the floating IP created in
  test_get_advertised_routes_floating_ips() doesn't get cleaned up fast
  enough, and test_get_advertised_routes_null_address_scope() runs
  before the floating IP from the previous test has actually been
  cleaned up.

  [1] 
http://logs.openstack.org/85/267985/9/check/gate-neutron-dsvm-api/b09db1c/console.html
  [2] 
http://logs.openstack.org/76/282876/17/check/gate-neutron-dsvm-api/b49cae9/console.html
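
  If the theory holds, the usual remedy is to poll until the previous
  test's floating IP is really gone before asserting. A minimal, hedged
  sketch with a hypothetical client:

      import time

      def wait_for_floating_ip_gone(client, fip_id, timeout=60):
          # Don't assume the delete is synchronous; the association can
          # linger briefly after the API call returns.
          deadline = time.time() + timeout
          while time.time() < deadline:
              fips = client.list_floatingips()['floatingips']
              if not any(f['id'] == fip_id for f in fips):
                  return
              time.sleep(1)
          raise AssertionError('floating IP %s was not cleaned up' % fip_id)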

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1553374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552897] Re: Unit test failure when building debian package for Mitaka b3 if dogpile.cache is not 0.5.7

2016-03-04 Thread Davanum Srinivas (DIMS)
Fixed now https://review.openstack.org/#/c/288474/

** Changed in: oslo.cache
   Status: New => Fix Released

** Changed in: oslo.cache
   Importance: Undecided => High

** Changed in: oslo.cache
 Assignee: (unassigned) => Davanum Srinivas (DIMS) (dims-v)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1552897

Title:
  Unit test failure when building debian package for Mitaka b3 if
  dogpile.cache is not 0.5.7

Status in OpenStack Compute (nova):
  New
Status in oslo.cache:
  Fix Released

Bug description:
  When building the Debian package of Nova for Mitaka b3 (ie: Nova
  13.0.0~b3), I get the below unit test failures. Please help me to fix
  this. I'm available on IRC if you need more details and a way to
  reproduce (but basically, try to build the package in Sid +
  Experimental using the sources from git clone
  git://git.debian.org/git/openstack/nova.git).

  ==
  FAIL: nova.tests.unit.test_cache.TestOsloCache.test_get_client
  nova.tests.unit.test_cache.TestOsloCache.test_get_client
  --
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 1305, in patched
  return func(*args, **keywargs)
File "nova/tests/unit/test_cache.py", line 64, in test_get_client
  expiration_time=60)],
File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 969, in 
assert_has_calls
  ), cause)
File "/usr/lib/python2.7/dist-packages/six.py", line 718, in raise_from
  raise value
  AssertionError: Calls not found.
  Expected: [call('oslo_cache.dict', arguments={'expiration_time': 60}, 
expiration_time=60), call('dogpile.cache.memcached', arguments={'url': 
['localhost:11211']}, expiration_time=60), call('dogpile.cache.null', 
_config_argument_dict=, _config_prefix='cache.oslo.arguments.', 
expiration_time=60, wrap=None), call('oslo_cache.dict', 
arguments={'expiration_time': 60}, expiration_time=60)]
  Actual: [call('oslo_cache.dict', arguments={'expiration_time': 60}, 
expiration_time=60),
   call('dogpile.cache.memcached', arguments={'url': ['localhost:11211']}, 
expiration_time=60),
   call('dogpile.cache.null', 
_config_argument_dict={'cache.oslo.arguments.pool_maxsize': 10, 
'cache.oslo.arguments.pool_unused_timeout': 60, 'cache.oslo.arguments.url': 
['localhost:11211'], 'cache.oslo.arguments.socket_timeout': 3, 
'cache.oslo.expiration_time': 60, 'cache.oslo.arguments.dead_retry': 300, 
'cache.oslo.arguments.pool_connection_get_timeout': 10, 'cache.oslo.backend': 
'dogpile.cache.null'}, _config_prefix='cache.oslo.arguments.', 
expiration_time=60),
   call('oslo_cache.dict', arguments={'expiration_time': 60}, 
expiration_time=60)]

  ==
  FAIL: nova.tests.unit.test_cache.TestOsloCache.test_get_memcached_client
  nova.tests.unit.test_cache.TestOsloCache.test_get_memcached_client
  --
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 1305, in patched
  return func(*args, **keywargs)
File "nova/tests/unit/test_cache.py", line 120, in test_get_memcached_client
  expiration_time=60, wrap=None)]
File "/usr/lib/python2.7/dist-packages/mock/mock.py", line 969, in 
assert_has_calls
  ), cause)
File "/usr/lib/python2.7/dist-packages/six.py", line 718, in raise_from
  raise value
  AssertionError: Calls not found.
  Expected: [call('dogpile.cache.memcached', arguments={'url': 
['localhost:11211']}, expiration_time=60), call('dogpile.cache.memcached', 
arguments={'url': ['localhost:11211']}, expiration_time=60), 
call('dogpile.cache.null', _config_argument_dict=, 
_config_prefix='cache.oslo.arguments.', expiration_time=60, wrap=None)]
  Actual: [call('dogpile.cache.memcached', arguments={'url': 
['localhost:11211']}, expiration_time=60),
   call('dogpile.cache.memcached', arguments={'url': ['localhost:11211']}, 
expiration_time=60),
   call('dogpile.cache.null', 
_config_argument_dict={'cache.oslo.arguments.pool_maxsize': 10, 
'cache.oslo.arguments.pool_unused_timeout': 60, 'cache.oslo.arguments.url': 
['localhost:11211'], 'cache.oslo.arguments.socket_timeout': 3, 
'cache.oslo.expiration_time': 60, 'cache.oslo.arguments.dead_retry': 300, 
'cache.oslo.arguments.pool_connection_get_timeout': 10, 'cache.oslo.backend': 
'dogpile.cache.null'}, _config_prefix='cache.oslo.arguments.', 
expiration_time=60)]
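
  One way to keep such assertions stable across dogpile.cache versions is
  to stop pinning the version-dependent argument dict, e.g. with mock.ANY.
  A hedged sketch (the mock name is illustrative):

      import mock

      create_region_mock.assert_has_calls(
          [mock.call('dogpile.cache.null',
                     _config_argument_dict=mock.ANY,
                     _config_prefix='cache.oslo.arguments.',
                     expiration_time=60, wrap=None)])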

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1552897/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team

[Yahoo-eng-team] [Bug 1553144] Re: When some instances aren't deleted correctly and libvirt still keeps the domain for the instance, the resource tracker will fail to update available resources

2016-03-04 Thread Matt Riedemann
*** This bug is a duplicate of bug 1416132 ***
https://bugs.launchpad.net/bugs/1416132

Actually I think this is already fixed:

https://review.openstack.org/#/c/221162/

Now it checks:

if guest.uuid in local_instances:
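
A minimal sketch of the guarded loop (names approximate, not nova's exact
code):

    disk_over_committed = 0
    for guest in host.list_guests():
        if guest.uuid not in local_instances:
            # The domain exists in libvirt but the instance is gone
            # from the DB; skip it instead of raising KeyError.
            continue
        disk_over_committed += disk_size_for(  # hypothetical helper
            local_instances[guest.uuid], bdms.get(guest.uuid))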

** Tags added: compute libvirt

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Tags added: liberty-backport-potential

** This bug has been marked a duplicate of bug 1416132
   _get_instance_disk_info fails to read files from NFS due to permissions

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1553144

Title:
  When some instances aren't deleted correctly and libvirt still keeps the
  domain for the instance, the resource tracker will fail to update
  available resources

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) liberty series:
  New

Bug description:
  When an instance was deleted in the DB but is still present on the
  compute node, the resource tracker will fail to update available
  resources.

  
  2016-03-04 10:58:28.143 ERROR nova.compute.manager 
[req-d2f1c99b-0e81-4b6d-9361-a40bd2218141 None None] Error updating resources 
for node vm6.

  
  2016-03-04 10:58:28.143 TRACE nova.compute.manager Traceback (most recent 
call last):
  2016-03-04 10:58:28.143 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/manager.py", line 6379, in 
update_available_resource
  2016-03-04 10:58:28.143 TRACE nova.compute.manager 
rt.update_available_resource(context)
  2016-03-04 10:58:28.143 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/resource_tracker.py", line 491, in 
update_available_resource
  2016-03-04 10:58:28.143 TRACE nova.compute.manager resources = 
self.driver.get_available_resource(self.nodename)
  2016-03-04 10:58:28.143 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 5414, in 
get_available_resource
  2016-03-04 10:58:28.143 TRACE nova.compute.manager disk_over_committed = 
self._get_disk_over_committed_size_total()
  2016-03-04 10:58:28.143 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 7047, in 
_get_disk_over_committed_size_total
  2016-03-04 10:58:28.143 TRACE nova.compute.manager 
local_instances[guest.uuid], bdms[guest.uuid])
  2016-03-04 10:58:28.143 TRACE nova.compute.manager KeyError: 
'49505c88-b38a-4100-ab56-97958b48b533'
  2016-03-04 10:58:28.143 TRACE nova.compute.manager


  The available resources won't get updated until the periodic task
  '_cleanup_running_deleted_instances' runs, if
  running_deleted_instance_action is 'reap'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1553144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553152] Re: misleading API documentation for block_device_mapping_v2

2016-03-04 Thread Anne Gentle
Adding openstack-api-site as the openstack/api-site repo is where this
type of info should go. It's the source for
http://developer.openstack.org/api-ref-compute-v2.1.html#createServer.

Here's the file to add the JSON schema info, parameter by parameter I
believe:

https://github.com/openstack/api-site/blob/master/api-ref/src/wadls
/compute-api/src/v2.1/wadl/servers-v2.1.wadl

** Also affects: openstack-api-site
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1553152

Title:
  misleading API documentation for block_device_mapping_v2

Status in OpenStack Compute (nova):
  Confirmed
Status in openstack-api-site:
  New

Bug description:
  Documentation [1] about `block_device_mapping_v2` when creating a
  server instance is misleading as it doesn't explain that it must
  actually be an array of mappings and there is no complete list of the
  supported keys. For example `volume_size` and `uuid` are not even
  mentioned.

  Thanks to an unrelated github bug [2] I figured it's something like this:
  "block_device_mapping_v2": [
{
  "boot_index": "0",
  "uuid": "ac408821-c95a-448f-9292-73986c790911",
  "source_type": "image",
  "volume_size": "25",
  "destination_type": "volume",
  "delete_on_termination": true
}

  The above example is something that very quickly gets you to the
  point. The block_device_mapping.rst doc explains some of this, but I
  could only find that doc by grepping nova's sources, and I still
  couldn't figure out from it how on earth I should construct my API
  call.

  What I wanted to do is basically launch an instance off a new
  custom-sized volume. That turned out to be very easy eventually, but
  finding it out took hours for me, as I'm simply an API user and I have
  no experience whatsoever installing or configuring, let alone hacking
  on, OpenStack.

  P.S. I'm using a similar feature in GCE. They have it even nicer. When
  you specify the instance disks, it supports any options that are
  supported by the api call creating a standalone disk. I guess values
  are then passed to the disk api as is. Might be worth considering for
  a future API version. e.g. at the moment I can't specify a name for
  the new volume or many of the other options supported by the OS
  volumes API.

  [1] http://developer.openstack.org/api-ref-compute-v2.1.html#createServer
  [2] 
https://github.com/ggiamarchi/vagrant-openstack-provider/issues/209#issuecomment-73961050
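
  For the record, a hedged sketch of the full create-server request body
  the mapping above fits into (boot-from-volume, so imageRef stays empty;
  names and values are illustrative):

      body = {
          "server": {
              "name": "boot-from-new-volume",
              "flavorRef": "2",
              "imageRef": "",  # the image arrives via the mapping below
              "block_device_mapping_v2": [{
                  "boot_index": "0",
                  "uuid": "ac408821-c95a-448f-9292-73986c790911",
                  "source_type": "image",
                  "volume_size": "25",
                  "destination_type": "volume",
                  "delete_on_termination": True,
              }],
          }
      }
      # POST body to /v2.1/{tenant_id}/servers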

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1553152/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505476] Re: when live-migrate fails, the remove_volume_connection function accepts arguments in an incorrect order in kilo

2016-03-04 Thread Davanum Srinivas (DIMS)
** Changed in: nova
   Status: In Progress => Fix Committed

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1505476

Title:
  when live-migrate fails, the remove_volume_connection function accepts
  arguments in an incorrect order in kilo

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Openstack Version : kilo 2015.1.0

  Reproduce steps:

  please see the code path: openstack/nova/nova/compute/manager.py

  def _rollback_live_migration(self, context, instance,dest,
  block_migration, migrate_data=None):

  ..
  for bdm in bdms:
  if bdm.is_volume:
  self.compute_rpcapi.remove_volume_connection(
  context, instance, bdm.volume_id, dest)
  ..
   
  Actual result:

  def remove_volume_connection(self, context, volume_id, instance):
  ..
  ..

  Expected result:

  def remove_volume_connection(self, context, instance, volume_id):

  
  Please check this bug, thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1505476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543025] Re: Wrong UTC zoneinfo in cloud-images

2016-03-04 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1176-0ubuntu1

---
cloud-init (0.7.7~bzr1176-0ubuntu1) xenial; urgency=medium

  * d/README.source, d/new-upstream-snapshot: simplify the README.source
with a script.
  * d/rules: support DEB_BUILD_OPTIONS=nocheck and remove unused code.
  * d/rules: make tests with python3
  * d/control: add pep8 as a build depends
  * d/cloud-init.preinst, d/cloud-init.postinst adjust upgrade path
to adjust systemd jobs that put cloud-init unit jobs directly
in multi-user.target.
  * New upstream snapshot.
* Add Image Customization Parser for VMware vSphere Hypervisor Support.
  Disabled by default. [Sankar Tanguturi]
* lxd: add initial support for setting up lxd using 'lxd init'
* Handle escaped quotes in WALinuxAgentShim.find_endpoint (LP: #1488891)
* timezone: use a symlink when updating /etc/localtime (LP: #1543025)
* enable more code testing in 'make check'
* Added Bigstep datasource [Daniel Watkins]
* Enable password changing via a hashed string [Alex Sirbu]

 -- Scott Moser   Fri, 04 Mar 2016 15:44:02 -0500

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1543025

Title:
  Wrong UTC zoneinfo in cloud-images

Status in cloud-init:
  Triaged
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released

Bug description:
  ADT runs use cloud-images to create test VM environments. For the Xenial 
cloud-images I observed a weird issue where libvirt suddenly fails its 
build-time tests on a time offset test on UTC.
  Looking at the prepared image (cloud-init did already run there), I found 
that indeed a command-line of

  TZ=UTC date

  reports a CET based time. Looking further this seems to drill down
  into

  /usr/share/zoneinfo/UTC -> Zulu

  and that Zulu file (Zulu is another term for UTC) looks quite a bit bigger
  than the same file on other hosts and contains the CET string as well (normal
  ~128b, wrong size 2335). Forcing a reinstall of tzdata will fix the
  file and also allows the libvirt test to pass.

  So I am not sure whether this is wrong in the initial image base or gets
  broken in some way during cloud-init. That's why I'm starting by reporting
  it against cloud-init.
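
  The changelog entry above ("use a symlink when updating /etc/localtime")
  suggests the shape of the fix; a minimal, hedged sketch:

      import os

      def set_localtime(tz):
          # Symlinking into /usr/share/zoneinfo avoids stale or corrupted
          # copies like the oversized Zulu file described here; the final
          # rename makes the switch atomic.
          src = os.path.join('/usr/share/zoneinfo', tz)
          tmp = '/etc/localtime.new'  # hypothetical temp path
          if os.path.lexists(tmp):
              os.unlink(tmp)
          os.symlink(src, tmp)
          os.rename(tmp, '/etc/localtime')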

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1543025/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550023] Re: Really bad workflow bug in top of tree

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/284973
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=8a4aa96d7c0b0902acd23160ecf917431d5d7005
Submitter: Jenkins
Branch:master

commit 8a4aa96d7c0b0902acd23160ecf917431d5d7005
Author: David Medberry 
Date:   Thu Feb 25 15:41:09 2016 -0700

Don't force people to security groups after they add a FIP

Horizon should not move people from the instances page to
the security groups page after adding a FIP. It's already
too late to prevent bad things from happening and this
implies that it is not too late.

Change-Id: I03796253fc6b6c56572c6e841f3ce3102c9c6cdd
Closes-bug: #1550023


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1550023

Title:
  Really bad workflow bug in top of tree

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The associate floating IP success now takes you to a completely different
  web page than the one you started on.
  This is a fundamentally flawed, broken, bad assumption.

  Ref:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/access_and_security/floating_ips/workflows.py#L148

  It was probably done with the idea that you might want to change/check
  your sec group after the fact, but since it's already too late by then,
  don't do that.

  Also, it really breaks the stream of consciousness of working in

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1550023/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552898] Re: In the material theme, containers sit above the navigation (hamburger menu)

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/288117
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=7c56d99fc1d010b592808a3c68410aff5dd204f4
Submitter: Jenkins
Branch:master

commit 7c56d99fc1d010b592808a3c68410aff5dd204f4
Author: woodm1979 
Date:   Thu Mar 3 13:37:41 2016 -0700

Hamburger navigation now sits above containers

In the material theme, if the screen is too narrow, the navigation pane
becomes a collapsable "hamburger-menu" style button.  Currently, on the
containers page, when that button is pressed the navigation pane is
below the containers list.  This makes navigation impossible.

See screenshot here: http://i.imgur.com/uzdX66E.png

Adjusting the z-index of the container down from 10 to 2 alleviates this
issue.  Obviously the containers page is being reworked elsewhere; so
keeping this change as small as possible is appropriate.

Change-Id: I220d2f8a87e9a55b95884251b3368abe1167cafa
Closes-Bug: 1552898
Partially-implements: blueprint horizon-theme-css-reorg


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1552898

Title:
  In the material theme, containers sit above the navigation (hamburger
  menu)

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  http://i.imgur.com/uzdX66E.png

  This makes it so that navigation is impossible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1552898/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1549869] Re: Glance should return 204 when user downloads queued image file

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/254334
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=3077339f9fcd23ba0f7667571f885353fb03d7e1
Submitter: Jenkins
Branch:master

commit 3077339f9fcd23ba0f7667571f885353fb03d7e1
Author: Stuart McLaren 
Date:   Mon Dec 7 17:54:35 2015 +

Return 204 rather than 403 when no image data

As per http://developer.openstack.org/api-ref-image-v2.html:
 "If no image data exists, the call returns the HTTP 204 status code. "

This commit changed that to 403:

 d4d94b290ceb9147dd285822e201dd85ce812ef0

We should revert to the juno/kilo/liberty behaviour.

APIImpact

Closes-bug: 1549869

Change-Id: Ie9353bc254d11870abc102a7b9b4c7db3917abb4


** Changed in: glance
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1549869

Title:
  Glance should return 204 when user downloads queued image file

Status in Glance:
  Fix Released

Bug description:
  Previously (in Liberty), when a user tried to download the file while an
  image was in 'queued' status, Glance returned 204. In Mitaka this
  behavior was changed and now Glance returns 403. This is contrary to the
  Glance image API v2: http://developer.openstack.org/api-ref-image-v2.html
  We have to revert to the previous behavior.

  Previously: http://paste.openstack.org/show/487782/

  Now:  http://paste.openstack.org/show/488210/
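
  A minimal sketch of the documented behaviour, assuming a webob-based
  controller (not Glance's real code):

      import webob.exc

      def download(req, image):
          # Per the v2 API docs: no image data yet means 204 No Content.
          if image.status == 'queued':
              raise webob.exc.HTTPNoContent()
          return image.get_data()  # hypothetical data accessor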

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1549869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1522329] Re: Check 'destination_type' instead of 'source_type' when boot instance by image and a volume with name 'vda'

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/252836
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=66157aaeadc23d2183dae9046516abad0bcb81d5
Submitter: Jenkins
Branch:master

commit 66157aaeadc23d2183dae9046516abad0bcb81d5
Author: Kevin_Zheng 
Date:   Thu Dec 3 17:18:05 2015 +0800

Check 'destination_type' instead of 'source_type' in 
_check_and_transform_bdm

In compute.api._check_and_transform_bdm() we have logic to
avoid booting instances when both an image-ref and a volume named
'vda' are supplied. Currently, we check the bdm's 'source_type',
but in fact we should check its 'destination_type', as this
shows it is a cinder volume.

Change-Id: I1fe2cf7c6655e0e0c61371c6d7379ecfc7071cec
Closes-Bug: #1522329


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1522329

Title:
  Check 'destination_type' instead of 'source_type' when boot instance
  by image and a volume with name 'vda'

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/api.py#n796
  we have logic to identify whether the instance is booted using an image
  and also a volume named 'vda'. But we use 'source_type' to identify this.
  We should use 'destination_type' = 'volume' as the identifier, as this
  actually identifies that a cinder volume will be attached as 'vda'.
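
  A minimal sketch of the corrected condition (surrounding names are
  hypothetical):

      for bdm in block_device_mappings:
          # 'destination_type' says where the disk ends up: a cinder
          # volume used as the root disk has destination_type='volume'
          # regardless of whether its source was an image or a volume.
          if (bdm.get('destination_type') == 'volume'
                  and bdm.get('boot_index') in (0, '0') and image_href):
              raise exception.InvalidRequest()  # hypothetical exception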

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1522329/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548980] Re: nova list --deleted as admin fails with 404

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/283820
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=3d6bb233828ce63ae649e98e02dc59e04f3db2f5
Submitter: Jenkins
Branch:master

commit 3d6bb233828ce63ae649e98e02dc59e04f3db2f5
Author: Matt Riedemann 
Date:   Tue Feb 23 16:34:39 2016 -0500

Don't lazy-load instance.services if the instance is deleted

The 2.16 microversion added the host_status extended
server attribute which relies on the instance.services field.

The primary join in the database for that field is dependent on
the instance not being deleted.

When listing deleted instances at microversion>=2.16, the
compute API attempts to lazy-load the instance.services field
which fails with an InstanceNotFound because the instance
is deleted.

In this case, it's best to just set instance.services to an
empty ServiceList when lazy loading the services field on a
deleted instance since the DB object won't have any value for
the services attribute anyway.

Change-Id: Ic2f239f634f917a5771b0401a5073546c710c036
Closes-Bug: #1548980
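
A minimal sketch of the lazy-load special case described above (names
approximate, not nova's exact code):

    def obj_load_attr(self, attrname):
        if attrname == 'services' and self.deleted:
            # The services DB join excludes deleted instances, so a
            # round-trip would raise InstanceNotFound; an empty list
            # is the only sensible value here.
            self.services = objects.ServiceList(self._context)
            self.obj_reset_changes(fields=['services'])
            return
        self._load_generic(attrname)  # hypothetical fallback loader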


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1548980

Title:
  nova list --deleted as admin fails with 404

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Mitaka devstack created about a week ago:

  stack@neutron:~/python-novaclient$ cd /opt/stack/nova
  stack@neutron:~/nova$ git log -1
  commit 11019fab7a2415cbca8b93e9346b21327e79688d
  Author: bhagyashris 
  Date:   Tue Feb 16 01:13:23 2016 -0800

  Remove duplicate key from dictionary

  There is a duplicate dictionary key entry in test_vmops.py.
  Removed duplicate key 'display_name' from dictionary.

  TrivialFix

  Change-Id: I4e779bceb26077b95bd3ae4ab19e60152c126e34
  stack@neutron:~/nova$

  
  --

  I have a deleted instance:

  mysql> select id,uuid,display_name,deleted from nova.instances;
  ++--+--+-+
  | id | uuid | display_name | deleted |
  ++--+--+-+
  |  1 | 55b9808b-7e01-44ba-ab84-c0bac34d57f1 | test1|   1 |
  ++--+--+-+
  1 row in set (0.00 sec)

  
  I try to list deleted instances using 'nova list --deleted' and it fails with 
a 404.

  Checking the n-api logs there is an InstanceNotFound, it looks like
  when lazy-loading the instance.services field:

  2016-02-23 20:17:25.103 DEBUG nova.objects.instance [req-4f701f32-d988-4ae0-93f5-11a4591b297e admin alt_demo] Lazy-loading 'services' on Instance uuid 55b9808b-7e01-44ba-ab84-c0bac34d57f1 from (pid=17965) obj_load_attr /opt/stack/nova/nova/objects/instance.py:879
  2016-02-23 20:17:25.168 ERROR nova.api.openstack [req-4f701f32-d988-4ae0-93f5-11a4591b297e admin alt_demo] Caught error: Instance 55b9808b-7e01-44ba-ab84-c0bac34d57f1 could not be found.
  2016-02-23 20:17:25.168 TRACE nova.api.openstack Traceback (most recent call last):
  2016-02-23 20:17:25.168 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/__init__.py", line 140, in __call__
  2016-02-23 20:17:25.168 TRACE nova.api.openstack     return req.get_response(self.application)
  2016-02-23 20:17:25.168 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1317, in send
  2016-02-23 20:17:25.168 TRACE nova.api.openstack     application, catch_exc_info=False)
  2016-02-23 20:17:25.168 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1281, in call_application
  2016-02-23 20:17:25.168 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
  2016-02-23 20:17:25.168 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2016-02-23 20:17:25.168 TRACE nova.api.openstack     return resp(environ, start_response)
  2016-02-23 20:17:25.168 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
  2016-02-23 20:17:25.168 TRACE nova.api.openstack     resp = self.call_func(req, *args, **self.kwargs)
  2016-02-23 20:17:25.168 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
  2016-02-23 20:17:25.168 TRACE nova.api.openstack     return self.func(req, *args, **kwargs)
  2016-02-23 20:17:25.168 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", line 457, in __call__
  2016-02-23 20:17:25.168 TRACE nova.api.openstack     response = req.get_response(self._app)
  2016-02-23 20:17:25.168 TR

[Yahoo-eng-team] [Bug 1551836] Re: CORS middleware's latent configuration options need to change

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/288528
Committed: 
https://git.openstack.org/cgit/openstack/solum/commit/?id=dbfdf9f5db2c216e222dfa20e49e7afba932f34c
Submitter: Jenkins
Branch:master

commit dbfdf9f5db2c216e222dfa20e49e7afba932f34c
Author: Michael Krotscheck 
Date:   Fri Mar 4 07:29:05 2016 -0800

Moved CORS middleware configuration into oslo-config-generator

The default values needed for solum's implementation of cors
middleware have been moved from paste.ini into the configuration
hooks provided by oslo.config. Furthermore, these values have been
added to the default configuration parsing. This ensures
that if a value remains unset in solum.conf, it falls back
to sane defaults, and that an operator modifying the
configuration file is presented with the default set of
necessary headers.

Change-Id: I6f30224ac1b11fc4019dbc5ae5ec1e1fedbfe97d
Closes-Bug: 1551836


** Changed in: solum
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1551836

Title:
  CORS middleware's latent configuration options need to change

Status in Aodh:
  In Progress
Status in Barbican:
  In Progress
Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in cloudkitty:
  In Progress
Status in congress:
  In Progress
Status in Cue:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  In Progress
Status in heat:
  In Progress
Status in Ironic:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  In Progress
Status in Manila:
  In Progress
Status in Mistral:
  In Progress
Status in Murano:
  In Progress
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.config:
  Fix Released
Status in Sahara:
  In Progress
Status in OpenStack Search (Searchlight):
  In Progress
Status in Solum:
  Fix Released
Status in Trove:
  In Progress

Bug description:
  It was pointed out in http://lists.openstack.org/pipermail/openstack-
  dev/2016-February/086746.html that configuration options included in
  paste.ini are less than optimal, because they impose an upgrade burden
  on both operators and engineers. The following discussion expanded to
  all projects (not just those using paste), and the following
  conclusion was reached:

  A) All generated configuration files should contain any headers which the
  API needs to operate. This is supported in oslo.config's generate-config
  script as of 3.7.0.
  B) These same configuration headers should be set as defaults for the
  given API, using cfg.set_defaults. This permits an operator to simply
  activate a domain without having to tweak additional settings.
  C) All hardcoded headers should be detached from the CORS middleware.
  D) Configuration and activation of CORS should be consistent across all
  projects.

  It was also agreed that this is a blocking bug for mitaka. A reference
  patch has already been approved for keystone, available here:
  https://review.openstack.org/#/c/285308/
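
  A minimal sketch of point (B), assuming oslo.middleware's
  cors.set_defaults helper as in the keystone reference patch (the header
  and method lists here are purely illustrative):

      from oslo_middleware import cors

      def set_middleware_defaults():
          # Override upstream CORS option defaults with the headers this
          # API needs; an operator then only has to set allowed_origin.
          cors.set_defaults(
              allow_headers=['X-Auth-Token', 'X-OpenStack-Request-ID'],
              expose_headers=['X-Auth-Token', 'X-OpenStack-Request-ID'],
              allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'])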

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1551836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551333] Re: Horizon should make use of the new handling of default subnetpools in Mitaka

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/286163
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=5a68857bfc39bf833facbe76d1fe01ea43df876f
Submitter: Jenkins
Branch:master

commit 5a68857bfc39bf833facbe76d1fe01ea43df876f
Author: Frode Nordahl 
Date:   Mon Feb 29 18:05:49 2016 +0100

Deprecate default_*_subnet_pool_label options

Starting with Mitaka, Neutron API handling of default subnetpool has
changed [1].

If a default subnetpool exists in Neutron it will show up in the
subnetpool list. Thus no change in Horizon is needed to handle this
part.

The following changes are required in Horizon:
- Mark the 'default_ipv4_subnet_pool_label' and
  'default_ipv6_subnet_pool_label' configuration options for use
  with Liberty only, deprecate them and tag them for removal in a
  future release.
- When the configuration options are removed, the _check_subnet_data
  function should no longer allow passing empty 'cidr' and
  'subnetpool_id'.

References:
1: http://docs.openstack.org/releasenotes/neutron/unreleased.html

Change-Id: Ib1d4143251421d03e4e9c3071d43d2423e3b0d8c
Closes-Bug: #1551333


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1551333

Title:
  Horizon should make use of the new handling of default subnetpools in
  Mitaka

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Starting with Mitaka, Neutron API handling of default subnetpool has
  changed [1].

  If a default subnetpool exists in Neutron it will show up in the
  subnetpool list. Thus no change in Horizon is needed to handle this
  part.

  The following changes are required in Horizon:
  - Mark the 'default_ipv4_subnet_pool_label' and
  'default_ipv6_subnet_pool_label' configuration options for use with
  Liberty only, deprecate them and tag them for removal in a future
  release (see the sketch after this list).
  - When the configuration options are removed, the _check_subnet_data
  function should no longer allow passing empty 'cidr' and
  'subnetpool_id'.
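
  A sketch of the deprecated options, assuming they live in the
  OPENSTACK_NEUTRON_NETWORK dict of Horizon's local_settings.py as in the
  sample settings:

      # Liberty-only settings, now deprecated: with Mitaka the default
      # subnetpool simply shows up in the subnetpool list.
      OPENSTACK_NEUTRON_NETWORK = {
          'default_ipv4_subnet_pool_label': None,
          'default_ipv6_subnet_pool_label': None,
      }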

  References:
  1: http://docs.openstack.org/releasenotes/neutron/unreleased.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1551333/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553389] [NEW] Branding: Table Action dropdown hovers theme issue

2016-03-04 Thread Diana Whitten
Public bug reported:

Branding: Table action dropdown hover styles should inherit more from
the theme. Right now they only toggle the font color, which does not
look good in themes whose primary color clashes with the danger color.

See Darkly screenshot here:
https://i.imgur.com/7uGXEVt.png

** Affects: horizon
 Importance: Low
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Diana Whitten (hurgleburgler)

** Changed in: horizon
   Status: New => In Progress

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1553389

Title:
  Branding: Table Action dropdown hovers theme issue

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Branding: Table action dropdown hover styles should inherit more from
  the theme. Right now they only toggle the font color, which does not
  look good in themes whose primary color clashes with the danger color.

  See Darkly screenshot here:
  https://i.imgur.com/7uGXEVt.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1553389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541621] Re: Invalid fernet X-Subject-Token token should result in 404 instead of 401

2016-03-04 Thread Guang Yee
** Also affects: keystone/liberty
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1541621

Title:
  Invalid fernet X-Subject-Token token should result in 404 instead of
  401

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Identity (keystone) liberty series:
  New

Bug description:
  When a scoped fernet token is no longer valid (i.e. all the roles had
  been removed from the scope), token validation should result in 404
  instead of 401. According to Keystone V3 API spec, 401 is returned
  only if X-Auth-Token is invalid [0]. Invalid X-Subject-Token should
  yield 404. Furthermore, the auth_token middleware only treats 404 as an
  invalid subject token and caches it accordingly [1]. An improper 401
  will cause unnecessary churn, as the middleware will repeatedly attempt
  to re-authenticate the service user.

  
  To reproduce the problem:

  1. get a project-scoped token
  2. remove all the roles assigned to the user for that project
  3. attempt to validate that project-scoped token; it results in 401
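
  For example, step 3 with python-requests (endpoint and tokens are
  placeholders):

      import requests

      # Tokens obtained in steps 1-2; typical keystone v3 endpoint.
      admin_token = 'ADMIN_TOKEN'
      project_token = 'PROJECT_SCOPED_TOKEN'

      resp = requests.get(
          'http://controller:5000/v3/auth/tokens',
          headers={'X-Auth-Token': admin_token,
                   'X-Subject-Token': project_token})
      # Observed today: 401; per the v3 API spec this should be 404.
      print(resp.status_code)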

  [0] 
https://github.com/openstack/keystone-specs/blob/master/api/v3/identity-api-v3.rst#401-unauthorized
  [1] 
https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/_identity.py#L215

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1541621/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553420] [NEW] Disabling the Publicizing of Glance Image does not work.

2016-03-04 Thread Dan Nguyen
Public bug reported:

The Glance policy file contains a property that controls the ability to
publicize a Glance image. In Horizon we use this policy check to hide
the checkbox on Project > Images > Create Image. This is working as
expected.

For updating an image, we attempt to make the Public checkbox read-only,
which isn't enough to disable it.

To test, try the following:

1) Update the glance_policy.json file in Horizon (located here 
horizon/openstack_dashboard/conf/) to reflect this rule:
...   
 "publicize_image": "role:admin",
...
2) Create a non-admin user and try to Create a new Glance Image
3) Observe that there is no Public check box.
4) This is expected.
5) Continue creating the image.
6) Once the image is created, Click on Edit Image
7) Notice that Public checkbox is there and you can select it.
8) This is not expected.

Expected Behavior:
For the Edit Image modal, the checkbox should be disabled like this.

https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/images/images/forms.py#L344

if not policy.check((("image", "publicize_image"),), request):
    # 'readonly' alone does not prevent toggling a checkbox; the
    # 'disabled' attribute is what actually disables it.
    self.fields['public'].widget = forms.CheckboxInput(
        attrs={'readonly': 'readonly', 'disabled': 'disabled'})

** Affects: horizon
 Importance: Undecided
 Assignee: Dan Nguyen (daniel-a-nguyen)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Dan Nguyen (daniel-a-nguyen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1553420

Title:
  Disabling the Publicizing of Glance Image does not work.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Glance policy file contains a property that controls the ability
  to publicize a Glance image. In Horizon we use this policy check to
  hide the checkbox on Project > Images > Create Image. This is working
  as expected.

  For updating an image, we attempt to make the Public checkbox
  read-only, which isn't enough to disable it.

  To test, try the following:

  1) Update the glance_policy.json file in Horizon (located here 
horizon/openstack_dashboard/conf/) to reflect this rule:
  ...   
   "publicize_image": "role:admin",
  ...
  2) Create a non-admin user and try to Create a new Glance Image
  3) Observe that there is no Public check box.
  4) This is expected.
  5) Continue creating the image.
  6) Once the image is created, Click on Edit Image
  7) Notice that Public checkbox is there and you can select it.
  8) This is not expected.

  Expected Behavior:
  For the Edit Image modal, the checkbox should be disabled like this.

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/images/images/forms.py#L344

  if not policy.check((("image", "publicize_image"),), request):
      # 'readonly' alone does not prevent toggling a checkbox; the
      # 'disabled' attribute is what actually disables it.
      self.fields['public'].widget = forms.CheckboxInput(
          attrs={'readonly': 'readonly', 'disabled': 'disabled'})

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1553420/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551689] Re: Adding member in a list causes 500

2016-03-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/286526
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=6c6151af8ee8c4f5170f95b862b2b5b78f7fff8a
Submitter: Jenkins
Branch:master

commit 6c6151af8ee8c4f5170f95b862b2b5b78f7fff8a
Author: Niall Bunting 
Date:   Tue Mar 1 11:52:04 2016 +

Creating or updating a image member in a list causes 500

This change catches a TypeError that can be raised if a user mistakenly
uses a list instead of a dict. This occurs both when adding a new member
and when updating a member that has already been added.

This causes an HTTPBadRequest to be raised with instructions on how to
fix the problem.

Change-Id: I6af3e0ae45ee535859c4ad278ccf995643225585
Closes-Bug: 1551689


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1551689

Title:
  Adding member in a list causes 500

Status in Glance:
  Fix Released

Bug description:
  Overview:
  One of the responses that wsgi returns reports: "Unexpected body type.
  Expected list/dict." I then tried supplying the new member as a list,
  and the request failed with a 500.

  How to reproduce:
  The command:
  nib@VM:~/devstack/devstack$ curl -H "X-Auth-Token: $token" -X POST http://127.0.0.1:9292/v2/images/a394fe2a-bc8b-4485-9819-e264d278e45f/members -d '["f9544674f852450faf5b595a38f4e98f"]'

  returns an HTML error page:
  500 Internal Server Error
  The server has either erred or is incapable of performing the requested operation.

  Partial stack trace:
  2016-03-01 11:35:07.189 TRACE glance.common.wsgi   File "/opt/stack/glance/glance/api/v2/image_members.py", line 249, in create
  2016-03-01 11:35:07.189 TRACE glance.common.wsgi     member_id = body['member']
  2016-03-01 11:35:07.189 TRACE glance.common.wsgi TypeError: list indices must be integers, not str

  Output:
  500

  Expected:
  400
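
  The fix described above amounts to something like the following sketch
  (helper name and message text are illustrative; the real change is
  around line 249 of glance/api/v2/image_members.py):

      import webob.exc

      def _get_member_id(body):
          # body is the deserialized request payload; if the caller sent
          # a JSON list, indexing it with the string 'member' raises
          # TypeError, which previously surfaced as a 500.
          try:
              return body['member']
          except TypeError:
              msg = ("Unexpected body type. Expected a dict such as "
                     '{"member": "<member-id>"}.')
              raise webob.exc.HTTPBadRequest(explanation=msg)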

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1551689/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537510] Re: test_models_sync may not always detect if a model is not included in head.py

2016-03-04 Thread Armando Migliaccio
** Changed in: neutron
Milestone: mitaka-rc1 => None

** Changed in: neutron
   Status: In Progress => Won't Fix

** Changed in: neutron
 Assignee: Henry Gessau (gessau) => (unassigned)

** Changed in: neutron
   Importance: Low => Undecided

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1537510

Title:
  test_models_sync may not always detect if a model is not included in
  head.py

Status in neutron:
  Won't Fix

Bug description:
  Change https://review.openstack.org/212213 added some models, but they
  were not added to head.py. This should have been detected by
  test_models_sync in the functional job, but it is currently not being
  detected there. Running test_models_sync locally, it correctly fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1537510/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553451] [NEW] Add timestamp for neutron core resources

2016-03-04 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/213586
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 4c2c983618ddb7a528c9005b0d7aaf5322bd198d
Author: ZhaoBo 
Date:   Thu Feb 18 13:28:58 2016 +0800

Add timestamp for neutron core resources

Currently, neutron core resources (like net, subnet, port and subnetpool)
do not record timestamps upon their creation and update. This
information can be critical for debugging purposes.

This patch introduces a new extension called "timestamp" extending the
existing neutron core resources to allow their creation and modification
times to be recorded. It adds the resource schema and the functions that
listen for DB events to populate the timestamp fields.

APIImpact
DocImpact: Neutron core resources now contain 'timestamp' fields like
   'created_at' and 'updated_at'

Change-Id: I24114b464403435d9c1e1e123d2bc2f37c8fc6ea
Partially-Implements: blueprint add-port-timestamp
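
For illustration, a port returned with this extension loaded would carry
fields like these (values are hypothetical):

    # Only the new fields matter here; everything else is unchanged.
    port = {
        'id': 'b6a1c2d3-0000-4111-8222-333344445555',
        'name': 'example-port',
        'created_at': '2016-02-18T13:28:58',
        'updated_at': '2016-03-04T09:15:00',
    }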

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1553451

Title:
  Add timestamp for neutron core resources

Status in neutron:
  New

Bug description:
  https://review.openstack.org/213586
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 4c2c983618ddb7a528c9005b0d7aaf5322bd198d
  Author: ZhaoBo 
  Date:   Thu Feb 18 13:28:58 2016 +0800

  Add timestamp for neutron core resources
  
  Currently, neutron core resources (like net, subnet, port and subnetpool)
  do not record timestamps upon their creation and update. This
  information can be critical for debugging purposes.

  This patch introduces a new extension called "timestamp" extending the
  existing neutron core resources to allow their creation and modification
  times to be recorded. It adds the resource schema and the functions that
  listen for DB events to populate the timestamp fields.
  
  APIImpact
  DocImpact: Neutron core resources now contain 'timestamp' fields like
 'created_at' and 'updated_at'
  
  Change-Id: I24114b464403435d9c1e1e123d2bc2f37c8fc6ea
  Partially-Implements: blueprint add-port-timestamp

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1553451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp