[Yahoo-eng-team] [Bug 1422699] Re: glance api doesn't abort start up on Store configuration errors

2015-09-04 Thread Flavio Percoco
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1422699

Title:
  glance api doesn't abort start up on Store configuration errors

Status in Glance:
  Fix Released
Status in Glance kilo series:
  New
Status in glance_store:
  Fix Released

Bug description:
  The Glance API service does not abort start-up when errors in the
glance-api.conf file are encountered.
  It would make sense to abort service start-up when a BadStoreConfiguration
exception is encountered, instead of just sending the error to the logs and
disabling adding images to that Store.

  For example, if a filesystem storage backend with multiple stores is
configured with a duplicate directory:
  filesystem_store_datadirs=/mnt/nfs1/images/:200
  filesystem_store_datadirs=/mnt/nfs1/images/:100

  Logs will have the error:
  ERROR glance_store._drivers.filesystem [-] Directory /mnt/nfs1/image 
specified multiple times in filesystem_store_datadirs option of filesystem 
configuration
  TRACE glance_store._drivers.filesystem None
  TRACE glance_store._drivers.filesystem
  WARNING glance_store.driver [-] Failed to configure store correctly: None 
Disabling add method.

  The service will start, and when a client tries to add an image they
  will receive a 410 Gone error saying: Error in store configuration.
  Adding images to store is disabled.

  This affects not only the filesystem storage backend but all glance-
  storage drivers that encounter an error in the configuration and raise
  a BadStoreConfiguration exception.

  How reproducible:
  Every time

  Steps to Reproduce:
  1. Configure Glance to use the filesystem storage backend with multiple
stores and duplicate a filesystem_store_datadirs entry.
  2. Run the glance API service

  Expected behavior:
  The Glance API service should not have started, and should have reported
that the directory was specified multiple times.
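
  A minimal, self-contained sketch of the expected behaviour (the helper
  name parse_datadirs and the local exception class are illustrative, not
  glance_store's actual API): duplicate directories should abort start-up
  rather than merely disable the add method:

  import sys

  class BadStoreConfiguration(Exception):
      pass

  def parse_datadirs(datadirs):
      seen = set()
      for entry in datadirs:
          directory = entry.rsplit(':', 1)[0]
          if directory in seen:
              raise BadStoreConfiguration(
                  "Directory %s specified multiple times in "
                  "filesystem_store_datadirs" % directory)
          seen.add(directory)
      return seen

  if __name__ == '__main__':
      try:
          parse_datadirs(['/mnt/nfs1/images/:200', '/mnt/nfs1/images/:100'])
      except BadStoreConfiguration as exc:
          sys.exit("Aborting glance-api start-up: %s" % exc)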

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1422699/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490528] Re: Strange logic for shared images for different tenants/users

2015-09-04 Thread Timur Nurlygayanov
Not reproduced; looks like a configuration issue in my previous lab.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1490528

Title:
  Strange logic for shared images for different tenants/users

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Nova version: 1:2015.1.1-1 (OpenStack Kilo release)

  Steps To Reproduce:
  1. Login to OpenStack horizon dashboard as admin user
  2. Upload Ubuntu cloud image into Glance
  3. Boot VM 'test1' from Ubuntu image
  4. Install stress tool on the VM: 'sudo apt-get install stress'
  5. Add 'stress' tool in autorun: 'sudo echo "stress --cpu 10 &" > 
/etc/rc.local'
  6. Reboot VM 'test1'
  7. Make a snapshot of 'test1' VM
  8. Mark this snapshot as 'public' in Glance
  9. Create 10 VMs with this image (snapshot) from admin user. All VMs became 
Active in several seconds.
  10. Login as non-admin user in another tenant (for example, user 'test' in 
tenant 'my-project')
  11. Boot 10 VMs with public image 'TestImage'

  Expected Result:
  VMs will start as quickly as for the admin tenant.

  Observed Result:
  VMs hang in the "Downloading Image" operation; it takes more than 20
minutes to run a VM from the snapshot in other tenants (while at the same
time it takes a few seconds to run VMs from the same image in the tenant
where the image was created).

  It looks like Nova tries to copy this image for each new tenant, which
  doesn't look correct.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1490528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492182] [NEW] word spelling mistake in django.po

2015-09-04 Thread liujh
Public bug reported:

In the */LC_MESSAGES/django.po file, the original entry is as follows:


msgctxt "status of a neteork port"
msgid "Error"
msgstr ""


I think "neteork" should be "network".

** Affects: horizon
 Importance: Undecided
 Assignee: liujh (t09sunny)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => liujh (t09sunny)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1492182

Title:
  word spelling mistake in django.po

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the */LC_MESSAGES/django.po file, the original entry is as follows:

  
  msgctxt "status of a neteork port"
  msgid "Error"
  msgstr ""
  

   I think "neteork" should be "network".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1492182/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492242] [NEW] novaclient is unable to update quotas via Nova V2.1

2015-09-04 Thread Andrey Kurilin
Public bug reported:

python-novaclient always transmits tenant-id in the request body for quota
updates, but Nova v2.1 (Nova v2 is OK with it) doesn't accept this
parameter and fails.

ERROR (BadRequest): Invalid input for field/attribute quota_set. Value:
{u'tenant_id': u'582df899eabc47018c96713c2f7196ba', u'security_groups':
15}. Additional properties are not allowed (u'tenant_id' was unexpected)
(HTTP 400) (Request-ID: req-8bbb5dda-c6f2-4126-b88e-c3949a85f8ff)

Found in rally gates: http://logs.openstack.org/29/184629/19/check/gate-rally-dsvm-rally-nova/02014e2/rally-plot/results.html.gz#/Quotas.nova_update/failures

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: python-novaclient
 Importance: Undecided
 Status: New


** Tags: nova quota-update

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492242

Title:
  novaclient is unable to update quotas via Nova V2.1

Status in OpenStack Compute (nova):
  New
Status in python-novaclient:
  New

Bug description:
  python-novaclient always transmits tenant-id in the request body for
  quota updates, but Nova v2.1 (Nova v2 is OK with it) doesn't accept
  this parameter and fails.

  ERROR (BadRequest): Invalid input for field/attribute quota_set.
  Value: {u'tenant_id': u'582df899eabc47018c96713c2f7196ba',
  u'security_groups': 15}. Additional properties are not allowed
  (u'tenant_id' was unexpected) (HTTP 400) (Request-ID: req-8bbb5dda-
  c6f2-4126-b88e-c3949a85f8ff)

  Found in rally gates: http://logs.openstack.org/29/184629/19/check/gate-rally-dsvm-rally-nova/02014e2/rally-plot/results.html.gz#/Quotas.nova_update/failures
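
  A hedged sketch of the client-side fix (the function name is
  illustrative, not novaclient's actual code): strip tenant_id from the
  quota_set body, since the v2.1 schema rejects additional properties:

  def build_quota_update_body(updates):
      quota_set = dict(updates)
      # Nova v2.1 validates the body against a JSON schema and rejects
      # unknown keys, so tenant_id must not be sent inside quota_set.
      quota_set.pop('tenant_id', None)
      return {'quota_set': quota_set}

  body = build_quota_update_body(
      {'tenant_id': '582df899eabc47018c96713c2f7196ba',
       'security_groups': 15})
  assert body == {'quota_set': {'security_groups': 15}}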

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1492242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492249] [NEW] regression in network performance causing large-ops jobs failures

2015-09-04 Thread Sean Dague
Public bug reported:

We're seeing a sharply increased failure rate with large-ops jobs in
Nova, and it looks like it's centered on message timeouts when tearing
down networks. This very distinctly started showing up this week in the
race to merge code.

Example failure: http://logs.openstack.org/29/220229/3/check/gate-tempest-dsvm-large-ops/4b9bd8f/logs/screen-n-cpu-1.txt.gz#_2015-09-04_10_47_08_361


2015-09-04 10:47:08.226 ERROR nova.compute.manager 
[req-bd86a3c8-121b-43c9-ac02-c1d3dcdb2136 
tempest-TestLargeOpsScenario-1437588360 
tempest-TestLargeOpsScenario-2081713896] [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e] Failed to deallocate network for instance.
2015-09-04 10:47:08.329 DEBUG nova.compute.manager 
[req-08f02702-32ba-44f2-aef3-269a0eff3550 
tempest-TestLargeOpsScenario-1437588360 
tempest-TestLargeOpsScenario-2081713896] [instance: 
2999f5a3-746b-4c7a-9dcd-0f82c031f540] terminating bdm 
BlockDeviceMapping(boot_index=0,connection_info=None,created_at=2015-09-04T10:42:57Z,delete_on_termination=True,deleted=False,deleted_at=None,destination_type='local',device_name='/dev/sda',device_type='disk',disk_bus=None,guest_format=None,id=281,image_id='4850537c-18cc-438b-b154-1aafb3c36ec8',instance=,instance_uuid=2999f5a3-746b-4c7a-9dcd-0f82c031f540,no_device=False,snapshot_id=None,source_type='image',updated_at=2015-09-04T10:43:54Z,volume_id=None,volume_size=None)
 _cleanup_volumes /opt/stack/new/nova/nova/compute/manager.py:2299
2015-09-04 10:47:08.361 ERROR nova.compute.manager 
[req-bd86a3c8-121b-43c9-ac02-c1d3dcdb2136 
tempest-TestLargeOpsScenario-1437588360 
tempest-TestLargeOpsScenario-2081713896] [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e] Setting instance vm_state to ERROR
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e] Traceback (most recent call last):
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2396, in 
do_terminate_instance
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e] self._delete_instance(context, 
instance, bdms, quotas)
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e]   File 
"/opt/stack/new/nova/nova/hooks.py", line 149, in inner
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e] rv = f(*args, **kwargs)
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2375, in _delete_instance
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e] quotas.rollback()
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e] six.reraise(self.type_, self.value, 
self.tb)
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2338, in _delete_instance
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e] self._shutdown_instance(context, 
instance, bdms)
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2265, in _shutdown_instance
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e] self._try_deallocate_network(context, 
instance, requested_networks)
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2194, in 
_try_deallocate_network
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e] 
self._set_instance_obj_error_state(context, instance)
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e] six.reraise(self.type_, self.value, 
self.tb)
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2189, in 
_try_deallocate_network
2015-09-04 10:47:08.361 8218 ERROR nova.compute.manager [instance: 
da1c0939-67ae-47bf-a403-1323ae2e1f7e] self._deallocate_network(context, 

[Yahoo-eng-team] [Bug 1492140] Re: consoleauth token displayed in log file

2015-09-04 Thread Jeremy Stanley
Since this report concerns a possible security risk, an incomplete
security advisory task has been added while the core security reviewers
for the affected project or projects confirm the bug and discuss the
scope of any vulnerability along with potential solutions.

I've switched this report from public to public security since it seems
to describe a potential vulnerability.

** Information type changed from Public to Public Security

** Also affects: ossa
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492140

Title:
  consoleauth token displayed in log file

Status in OpenStack Compute (nova):
  New
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  When an instance console is accessed, the auth token is displayed in
  nova-consoleauth.log:

  nova-consoleauth.log:874:2015-09-02 14:20:36 29941 INFO 
nova.consoleauth.manager [req-6bc7c116-5681-43ee-828d-4b8ff9d566d0 
fe3cd6b7b56f44c9a0d3f5f2546ad4db 37b377441b174b8ba2deda6a6221e399] Received 
Token: f8ea537c-b924-4d92-935e-4c22ec90d5f7, {'instance_uuid': 
u'dd29a899-0076-4978-aa50-8fb752f0c3ed', 'access_url': 
u'http://192.168.245.9:6080/vnc_auto.html?token=f8ea537c-b924-4d92-935e-4c22ec90d5f7',
 'token': u'f8ea537c-b924-4d92-935e-4c22ec90d5f7', 'last_activity_at': 
1441203636.387588, 'internal_access_path': None, 'console_type': u'novnc', 
'host': u'192.168.245.6', 'port': u'5900'}
  nova-consoleauth.log:881:2015-09-02 14:20:52 29941 INFO 
nova.consoleauth.manager [req-a29ab7d8-ab26-4ef2-b942-9bb02d5703a0 None None] 
Checking Token: f8ea537c-b924-4d92-935e-4c22ec90d5f7, True

  and

  nova-novncproxy.log:30:2015-09-02 14:20:52 31927 INFO
  nova.console.websocketproxy [req-a29ab7d8-ab26-4ef2-b942-9bb02d5703a0
  None None]   3: connect info: {u'instance_uuid':
  u'dd29a899-0076-4978-aa50-8fb752f0c3ed', u'internal_access_path':
  None, u'last_activity_at': 1441203636.387588, u'console_type':
  u'novnc', u'host': u'192.168.245.6', u'token': u'f8ea537c-b924-4d92
  -935e-4c22ec90d5f7', u'access_url':
  u'http://192.168.245.9:6080/vnc_auto.html?token=f8ea537c-b924-4d92
  -935e-4c22ec90d5f7', u'port': u'5900'}

  This token has a short lifetime, but the exposure still represents a
  potential security weakness, especially as the log records in question
  are INFO level and thus available via centralized logging. A user with
  real-time access to these records could mount a denial of service
  attack by accessing the instance console and performing a Ctrl-Alt-Del
  to reboot it.

  Alternatively, data privacy could be compromised if the attacker were
  able to obtain user credentials.
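
  One possible mitigation, sketched below (the helper name is
  illustrative): mask the token before the connect info is logged at INFO
  level:

  def mask_token(info, visible=4):
      masked = dict(info)
      token = masked.get('token')
      if token:
          short = token[:visible] + '***'
          masked['token'] = short
          if masked.get('access_url'):
              # The token also appears in the access URL query string.
              masked['access_url'] = masked['access_url'].replace(token,
                                                                  short)
      return masked

  info = {'token': 'f8ea537c-b924-4d92-935e-4c22ec90d5f7',
          'access_url': 'http://192.168.245.9:6080/vnc_auto.html'
                        '?token=f8ea537c-b924-4d92-935e-4c22ec90d5f7'}
  print(mask_token(info))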

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1492140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492228] [NEW] SR-IOV port doesn't reach OVS port on same compute node

2015-09-04 Thread Pedro Sousa
Public bug reported:

Hi,

I'm using Neutron Kilo with openvswitch and sriovnicswitch mechanism
driver in ml2 plugin.

Everything works fine, except that I cannot reach an OVS port instance from
an SR-IOV port instance (and vice versa) when they reside on the same
compute node, which uses a br-int bridge. Across different compute nodes it
works fine, but my understanding is that in this configuration it uses a
physical interface bridge, br-p2p1.

RDO Kilo on CentOS 7.1

python-neutron-2015.1.1-1.el7.noarch
openstack-neutron-openvswitch-2015.1.1-1.el7.noarch
python-neutronclient-2.4.0-1.el7.noarch
openstack-neutron-ml2-2015.1.1-1.el7.noarch
openstack-neutron-common-2015.1.1-1.el7.noarch
openstack-neutron-2015.1.1-1.el7.noarch
openstack-neutron-sriov-nic-agent-2015.1.1-1.el7.noarch
openvswitch-2.3.1-2.el7.x86_64
kernel-3.10.0-229.11.1.el7.x86_64

Regards,
Pedro Sousa

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492228

Title:
  SR-IOV port doesn't reach OVS port on same compute node

Status in neutron:
  New

Bug description:
  Hi,

  I'm using Neutron Kilo with openvswitch and sriovnicswitch mechanism
  driver in ml2 plugin.

  Everything works fine, except that I cannot reach an OVS port instance
  from an SR-IOV port instance (and vice versa) when they reside on the
  same compute node, which uses a br-int bridge. Across different compute
  nodes it works fine, but my understanding is that in this configuration
  it uses a physical interface bridge, br-p2p1.

  RDO Kilo on CentOS 7.1

  python-neutron-2015.1.1-1.el7.noarch
  openstack-neutron-openvswitch-2015.1.1-1.el7.noarch
  python-neutronclient-2.4.0-1.el7.noarch
  openstack-neutron-ml2-2015.1.1-1.el7.noarch
  openstack-neutron-common-2015.1.1-1.el7.noarch
  openstack-neutron-2015.1.1-1.el7.noarch
  openstack-neutron-sriov-nic-agent-2015.1.1-1.el7.noarch
  openvswitch-2.3.1-2.el7.x86_64
  kernel-3.10.0-229.11.1.el7.x86_64

  Regards,
  Pedro Sousa

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492228/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492121] Re: VMware: failed volume detachment leads to instances remaining on backend and volume still in 'in-use' state

2015-09-04 Thread Jeremy Stanley
Since this report concerns a possible security risk, an incomplete
security advisory task has been added while the core security reviewers
for the affected project or projects confirm the bug and discuss the
scope of any vulnerability along with potential solutions.

I've switched this report from private security to public security
because it was prematurely disclosed (a proposed fix explicitly
mentioning the bug was pushed to public code review rather than uploaded
as a bug attachment).

** Information type changed from Private Security to Public Security

** Also affects: ossa
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492121

Title:
  VMware: failed volume detachment leads to instances remaining on
  backend and volume still in 'in-use' state

Status in OpenStack Compute (nova):
  New
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  When the volume detachment fails, the termination of the instance leads to
the following:
  1. The Nova instance is deleted
  2. The instance on the VC still exists
  3. The volume is in 'in-use' state

  The nova instance is deleted, but the backend is not updated and the
  volumes are not set to available.

  One example of this happening is when the spawning of the instance fails
with an exception while attaching the volume.
  This issue could lead to a DoS of the backend, as the resources on the
backend are not cleaned up correctly.
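
  A hedged sketch of the clean-up ordering this implies (all names are
  illustrative, not Nova's actual code): record detach failures but still
  reconcile the backend VM and the volume state, and surface the failure
  instead of silently leaking backend resources:

  def terminate_instance(instance, volumes, backend, cinder):
      errors = []
      for volume in volumes:
          try:
              backend.detach_volume(instance, volume)
              cinder.set_volume_state(volume, 'available')
          except Exception as exc:
              errors.append((volume, exc))  # keep cleaning the rest
      backend.destroy(instance)  # remove the VM from the VC as well
      if errors:
          raise RuntimeError('volume cleanup incomplete: %r' % errors)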

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1492121/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492265] [NEW] NG Launch Instance fails unlimited instance quota

2015-09-04 Thread Matt Borland
Public bug reported:

The Angular Launch Instance wizard fails to operate if the instance
quota is unlimited (set to -1).

This is largely due to logic that assumes a value greater than zero.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1492265

Title:
  NG Launch Instance fails unlimited instance quota

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Angular Launch Instance wizard fails to operate if the instance
  quota is unlimited (set to -1).

  This is largely due to logic that assumes a value greater than zero.
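
  A minimal sketch of the fix (in Python for brevity; the actual wizard is
  Angular code): treat -1 as unlimited rather than as zero capacity:

  def remaining_capacity(limit, used):
      if limit < 0:  # -1 means the quota is unlimited
          return float('inf')
      return max(limit - used, 0)

  assert remaining_capacity(-1, 10) == float('inf')
  assert remaining_capacity(10, 4) == 6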

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1492265/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492270] Re: SortedDict class is deprecated since Django 1.7 and will be removed in 1.9

2015-09-04 Thread Timur Sufiev
There is a blueprint for this:
https://blueprints.launchpad.net/openstack/?searchtext=replace-sorteddict-with-ordereddict

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1492270

Title:
  SortedDict class is deprecated since Django 1.7 and will be removed in
  1.9

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Thus django.utils.datastructures.SortedDict should be replaced with
  collections.OrderedDict.
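
  The replacement is mechanical, e.g.:

  from collections import OrderedDict

  # instead of: from django.utils.datastructures import SortedDict
  ordered = OrderedDict([('a', 1), ('b', 2)])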

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1492270/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492293] [NEW] Openstack Dashboard could forbid access when service unavailable

2015-09-04 Thread Paul Karikh
Public bug reported:

The OpenStack Dashboard could forbid access to some panels if some service
is unavailable/unreachable: Horizon could interpret this as a lack of
rights and reject the query.
For example, a user tries to access a remote OpenStack host where the
keystone admin port (35357) is unreachable from the outside.
In that case the keystone client receives a 401 error, which causes a
redirect to the login page.
Such behaviour could be a problem if some other service is unavailable:
there could be a case where Horizon believes that the user does not have
enough rights and forbids access to the page or panel.
So we need some investigation here.

** Affects: horizon
 Importance: Undecided
 Assignee: Paul Karikh (pkarikh)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Paul Karikh (pkarikh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1492293

Title:
  Openstack Dashboard could forbid access when service unavailable

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The OpenStack Dashboard could forbid access to some panels if some service
is unavailable/unreachable: Horizon could interpret this as a lack of rights
and reject the query.
  For example, a user tries to access a remote OpenStack host where the
keystone admin port (35357) is unreachable from the outside.
  In that case the keystone client receives a 401 error, which causes a
redirect to the login page.
  Such behaviour could be a problem if some other service is unavailable:
there could be a case where Horizon believes that the user does not have
enough rights and forbids access to the page or panel.
  So we need some investigation here.
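
  A hedged sketch of the distinction Horizon would need to draw
  (illustrative, using python-requests): an unreachable endpoint should
  surface as a service error rather than a login redirect:

  import requests

  def probe_endpoint(url, token):
      try:
          resp = requests.get(url, headers={'X-Auth-Token': token},
                              timeout=5)
      except requests.ConnectionError:
          return 'service-unavailable'  # show an error page, don't log out
      if resp.status_code == 401:
          return 'unauthorized'  # genuine auth failure: go to login page
      return 'ok'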

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1492293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492255] [NEW] Cells gate job fails because of 2 network tests

2015-09-04 Thread Sylvain Bauza
Public bug reported:

===
2015-09-04 11:41:29.466 | Failed 2 tests - output below:
2015-09-04 11:41:29.467 | ==
2015-09-04 11:41:29.467 | 
2015-09-04 11:41:29.467 | 
tempest.api.compute.test_networks.ComputeNetworksTest.test_list_networks[id-3fe07175-312e-49a5-a623-5f52eeada4c2]
2015-09-04 11:41:29.467 | 
-
2015-09-04 11:41:29.467 | 
2015-09-04 11:41:29.467 | Captured traceback:
2015-09-04 11:41:29.467 | ~~~
2015-09-04 11:41:29.467 | Traceback (most recent call last):
2015-09-04 11:41:29.467 |   File "tempest/api/compute/test_networks.py", 
line 37, in test_list_networks
2015-09-04 11:41:29.467 | self.assertNotEmpty(networks, "No networks 
found.")
2015-09-04 11:41:29.467 |   File "tempest/test.py", line 588, in 
assertNotEmpty
2015-09-04 11:41:29.467 | self.assertTrue(len(list) > 0, msg)
2015-09-04 11:41:29.468 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/unittest2/case.py",
 line 702, in assertTrue
2015-09-04 11:41:29.468 | raise self.failureException(msg)
2015-09-04 11:41:29.468 | AssertionError: False is not true : No networks 
found.
2015-09-04 11:41:29.468 | 
2015-09-04 11:41:29.468 | 
2015-09-04 11:41:29.468 | Captured pythonlogging:
2015-09-04 11:41:29.468 | ~~~
2015-09-04 11:41:29.468 | 2015-09-04 11:31:55,672 10410 INFO 
[tempest_lib.common.rest_client] Request 
(ComputeNetworksTest:test_list_networks): 200 POST 
http://127.0.0.1:5000/v2.0/tokens
2015-09-04 11:41:29.468 | 2015-09-04 11:31:55,672 10410 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {}
2015-09-04 11:41:29.468 | Body: None
2015-09-04 11:41:29.468 | Response - Headers: 
{'x-openstack-request-id': 'req-d4c4b57f-495a-47d4-b157-4c6fa0c85796', 
'connection': 'close', 'vary': 'X-Auth-Token', 'status': '200', 'date': 'Fri, 
04 Sep 2015 11:31:55 GMT', 'content-length': '3863', 'content-type': 
'application/json', 'server': 'Apache/2.4.7 (Ubuntu)'}
2015-09-04 11:41:29.469 | Body: None
2015-09-04 11:41:29.469 | 2015-09-04 11:31:56,116 10410 INFO 
[tempest_lib.common.rest_client] Request 
(ComputeNetworksTest:test_list_networks): 200 GET 
http://127.0.0.1:8774/v2.1/3c0808e187e34cc998b1e08946c2a928/os-networks 0.443s
2015-09-04 11:41:29.469 | 2015-09-04 11:31:56,116 10410 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {'Content-Type': 
'application/json', 'X-Auth-Token': '', 'Accept': 'application/json'}
2015-09-04 11:41:29.469 | Body: None
2015-09-04 11:41:29.469 | Response - Headers: {'content-location': 
'http://127.0.0.1:8774/v2.1/3c0808e187e34cc998b1e08946c2a928/os-networks', 
'x-openstack-nova-api-version': '2.1', 'connection': 'close', 'vary': 
'X-OpenStack-Nova-API-Version', 'x-compute-request-id': 
'req-bc32c33a-31c7-4634-a3d2-70188c08e150', 'status': '200', 'date': 'Fri, 04 
Sep 2015 11:31:56 GMT', 'content-length': '16', 'content-type': 
'application/json'}
2015-09-04 11:41:29.469 | Body: {"networks": []}
2015-09-04 11:41:29.469 | 
2015-09-04 11:41:29.469 | 
2015-09-04 11:41:29.469 | 
tempest.api.compute.test_tenant_networks.ComputeTenantNetworksTest.test_list_show_tenant_networks[id-edfea98e-bbe3-4c7a-9739-87b986baff26]
2015-09-04 11:41:29.469 | 
--
2015-09-04 11:41:29.469 | 
2015-09-04 11:41:29.469 | Captured traceback:
2015-09-04 11:41:29.470 | ~~~
2015-09-04 11:41:29.470 | Traceback (most recent call last):
2015-09-04 11:41:29.470 |   File 
"tempest/api/compute/test_tenant_networks.py", line 29, in 
test_list_show_tenant_networks
2015-09-04 11:41:29.470 | self.assertNotEmpty(tenant_networks, "No 
tenant networks found.")
2015-09-04 11:41:29.470 |   File "tempest/test.py", line 588, in 
assertNotEmpty
2015-09-04 11:41:29.470 | self.assertTrue(len(list) > 0, msg)
2015-09-04 11:41:29.470 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/unittest2/case.py",
 line 702, in assertTrue
2015-09-04 11:41:29.470 | raise self.failureException(msg)
2015-09-04 11:41:29.470 | AssertionError: False is not true : No tenant 
networks found.
2015-09-04 11:41:29.470 | 
2015-09-04 11:41:29.470 | 
2015-09-04 11:41:29.471 | Captured pythonlogging:
2015-09-04 11:41:29.471 | ~~~
2015-09-04 11:41:29.471 | 2015-09-04 11:35:56,176 10414 INFO 
[tempest_lib.common.rest_client] Request 
(ComputeTenantNetworksTest:test_list_show_tenant_networks): 200 POST 
http://127.0.0.1:5000/v2.0/tokens
2015-09-04 11:41:29.471 | 2015-09-04 11:35:56,176 10414 DEBUG
[tempest_lib.common.rest_client] Request - Headers: {}
2015-09-04 

[Yahoo-eng-team] [Bug 1492254] [NEW] neutron should not try to bind port on compute with hypervisor_type ironic

2015-09-04 Thread Vasyl Saienko
Public bug reported:

Neutron tries to bind the port on the compute node where the instance is
launched. This doesn't make sense when hypervisor_type is ironic, since the
VM does not live on a hypervisor in this case. Furthermore, it leads to
failed provisioning of the baremetal node when neutron is not configured on
the ironic compute node.

Setup:
node-1: controller
node-2: ironic-compute without neutron

neutron-server.log: http://paste.openstack.org/show/445388/

** Affects: mos
 Importance: Undecided
 Status: New

** Affects: mos/7.0.x
 Importance: Undecided
 Status: New

** Affects: mos/8.0.x
 Importance: Undecided
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492254

Title:
  neutron should not try to bind port on compute with hypervisor_type
  ironic

Status in Mirantis OpenStack:
  New
Status in Mirantis OpenStack 7.0.x series:
  New
Status in Mirantis OpenStack 8.0.x series:
  New
Status in neutron:
  New

Bug description:
  Neutron tries to bind the port on the compute node where the instance is
  launched. This doesn't make sense when hypervisor_type is ironic, since
  the VM does not live on a hypervisor in this case. Furthermore, it leads
  to failed provisioning of the baremetal node when neutron is not
  configured on the ironic compute node.

  Setup:
  node-1: controller
  node-2: ironic-compute without neutron

  neutron-server.log: http://paste.openstack.org/show/445388/
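
  A minimal sketch of the proposed guard (illustrative, not neutron's
  actual code path): skip host binding when the owning compute reports the
  ironic hypervisor type:

  def should_bind_port(hypervisor_type):
      # Bare-metal instances are not wired through the compute host's
      # agent, so there is nothing to bind on that host.
      return hypervisor_type.lower() != 'ironic'

  assert not should_bind_port('ironic')
  assert should_bind_port('QEMU')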

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1492254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492254] Re: neutron should not try to bind port on compute with hypervisor_type ironic

2015-09-04 Thread Vasyl Saienko
** Also affects: mos
   Importance: Undecided
   Status: New

** Also affects: mos/8.0.x
   Importance: Undecided
   Status: New

** Changed in: mos/8.0.x
Milestone: None => 7.0

** Also affects: mos/7.0.x
   Importance: Undecided
   Status: New

** Changed in: mos/8.0.x
Milestone: 7.0 => 8.0

** Changed in: mos/7.0.x
Milestone: None => 7.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492254

Title:
  neutron should not try to bind port on compute with hypervisor_type
  ironic

Status in Mirantis OpenStack:
  New
Status in Mirantis OpenStack 7.0.x series:
  New
Status in Mirantis OpenStack 8.0.x series:
  New
Status in neutron:
  New

Bug description:
  Neutron tries to bind the port on the compute node where the instance is
  launched. This doesn't make sense when hypervisor_type is ironic, since
  the VM does not live on a hypervisor in this case. Furthermore, it leads
  to failed provisioning of the baremetal node when neutron is not
  configured on the ironic compute node.

  Setup:
  node-1: controller
  node-2: ironic-compute without neutron

  neutron-server.log: http://paste.openstack.org/show/445388/

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1492254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492281] [NEW] VPNaaS and FWaaS delete actions need refactoring

2015-09-04 Thread Tatiana Ovchinnikova
Public bug reported:

VPNaaS and FWaaS delete actions are implemented in an unusual way and
have a lot of unnecessary code.

** Affects: horizon
 Importance: Undecided
 Assignee: Tatiana Ovchinnikova (tmazur)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Tatiana Ovchinnikova (tmazur)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1492281

Title:
  VPNaaS and FWaaS delete actions need refactoring

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  VPNaaS and FWaaS delete actions are implemented in an unusual way and
  have a lot of unnecessary code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1492281/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492270] [NEW] SortedDict class is deprecated since Django 1.7 and will be removed in 1.9

2015-09-04 Thread Timur Sufiev
Public bug reported:

Thus django.utils.datastructures.SortedDict should be replaced with
collections.OrderedDict.

** Affects: horizon
 Importance: Undecided
 Assignee: Timur Sufiev (tsufiev-x)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1492270

Title:
  SortedDict class is deprecated since Django 1.7 and will be removed in
  1.9

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Thus django.utils.datastructures.SortedDict should be replaced with
  collections.OrderedDict.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1492270/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492274] [NEW] nova evacuate does not update instance's neutron port location in the DB

2015-09-04 Thread Daniel Gauthier
Public bug reported:


nova evacuate and nova host-evacuate don't update the database with
the new neutron port location after the instance has been successfully
evacuated.

The instance's neutron port is created on the right compute node and the
neutron port is created correctly using openvswitch. The instance
doesn't lose connectivity.

Everything is fine with migrate/live-migration/host-live-migration


To reproduce:
Shut down a compute node and execute a nova evacuate or a nova
host-evacuate.


Expected Result:
neutron port-show <port-id> shows that the neutron port is updated with the
new neutron port location


Actual Result:
neutron port-show <port-id> still shows the previous compute node


Version used:
ii  nova-api           1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - API frontend
ii  nova-cert          1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - certificate management
ii  nova-common        1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - common files
ii  nova-conductor     1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - conductor service
ii  nova-novncproxy    1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - NoVNC proxy
ii  nova-scheduler     1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - virtual machine scheduler
ii  python-nova        1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute Python libraries
ii  python-novaclient  1:2.22.0-0ubuntu1~cloud0      all  client library for OpenStack Compute API

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: evacuate

** Summary changed:

- nova evacuate does not update neutron port location in the DB
+ nova evacuate does not update instance's neutron port location in the DB

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492274

Title:
  nova evacuate does not update instance's neutron port location in the
  DB

Status in OpenStack Compute (nova):
  New

Bug description:

  nova evacuate and nova host-evacuate don't update the database with
  the new neutron port location after the instance has been successfully
  evacuated.

  The instance's neutron port is created on the right compute node and
  the neutron port is created correctly using openvswitch. The instance
  doesn't lose connectivity.

  Everything is fine with migrate/live-migration/host-live-migration


  To reproduce:
  Shut down a compute node and execute a nova evacuate or a nova
host-evacuate.


  Expected Result:
  neutron port-show <port-id> shows that the neutron port is updated with
the new neutron port location


  Actual Result:
  neutron port-show <port-id> still shows the previous compute node

  
  Version used:
  ii  nova-api           1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - API frontend
  ii  nova-cert          1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - certificate management
  ii  nova-common        1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - common files
  ii  nova-conductor     1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - conductor service
  ii  nova-novncproxy    1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - NoVNC proxy
  ii  nova-scheduler     1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute - virtual machine scheduler
  ii  python-nova        1:2015.1.0-0ubuntu1.1~cloud0  all  OpenStack Compute Python libraries
  ii  python-novaclient  1:2.22.0-0ubuntu1~cloud0      all  client library for OpenStack Compute API
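
  A hedged sketch of the missing step (an illustrative use of
  python-neutronclient; the credentials are placeholders): after a
  successful evacuation, each of the instance's ports should have its
  binding:host_id updated to the new host:

  from neutronclient.v2_0 import client

  neutron = client.Client(username='admin', password='secret',
                          tenant_name='admin',
                          auth_url='http://controller:5000/v2.0')

  def rebind_ports(instance_uuid, new_host):
      ports = neutron.list_ports(device_id=instance_uuid)['ports']
      for port in ports:
          neutron.update_port(port['id'],
                              {'port': {'binding:host_id': new_host}})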

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1492274/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491949] Re: gate-tempest-dsvm-large-ops fails to deallocate instance network due to rpc timeout

2015-09-04 Thread Matt Riedemann
(9:21:42 AM) sdague: so, honestly, the thing is that we're driving an 
unrealistic throughput on one node there
(9:21:54 AM) sdague: because we are using fakevirt
(9:22:39 AM) sdague: I suspect starting 175 vms on one node at once is probably 
not a thing with an actual hypervisor :)

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491949

Title:
  gate-tempest-dsvm-large-ops fails to deallocate instance network due
  to rpc timeout

Status in devstack:
  In Progress
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  http://logs.openstack.org/96/219696/4/check/gate-tempest-dsvm-large-
  ops/158f061/logs/screen-n-cpu-1.txt.gz?level=TRACE

  2015-09-03 15:11:10.090 ERROR nova.compute.manager 
[req-ae96c425-a199-472f-a0db-e1b48147bb4c 
tempest-TestLargeOpsScenario-1690771764 
tempest-TestLargeOpsScenario-1474206998] [instance: 
195361d7-95c3-4740-825b-1acab707969e] Failed to deallocate network for instance.
  2015-09-03 15:11:11.051 ERROR nova.compute.manager 
[req-ae96c425-a199-472f-a0db-e1b48147bb4c 
tempest-TestLargeOpsScenario-1690771764 
tempest-TestLargeOpsScenario-1474206998] [instance: 
195361d7-95c3-4740-825b-1acab707969e] Setting instance vm_state to ERROR
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] Traceback (most recent call last):
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2396, in 
do_terminate_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._delete_instance(context, 
instance, bdms, quotas)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/hooks.py", line 149, in inner
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] rv = f(*args, **kwargs)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2375, in _delete_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] quotas.rollback()
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] six.reraise(self.type_, self.value, 
self.tb)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2338, in _delete_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._shutdown_instance(context, 
instance, bdms)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2265, in _shutdown_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._try_deallocate_network(context, 
instance, requested_networks)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2194, in 
_try_deallocate_network
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] 
self._set_instance_obj_error_state(context, instance)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] six.reraise(self.type_, self.value, 
self.tb)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2189, in 
_try_deallocate_network
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._deallocate_network(context, 
instance, requested_networks)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1812, in _deallocate_network
  2015-09-03 15:11:11.051 22635 ERROR 

[Yahoo-eng-team] [Bug 1464825] Re: alembic migration script for vpnaas error

2015-09-04 Thread Ann Kamyshnikova
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464825

Title:
  alembic migration script for vpnaas error

Status in neutron:
  Invalid

Bug description:
  
  The 5689aa52_fix_identifier_map_fk.py alembic migration script fails with
the error: Cannot change column 'ipsec_site_conn_id': used in a foreign key
constraint 'cisco_csr_identifier_map_ibfk_1'


  + /usr/bin/neutron-db-manage --service vpnaas --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini 
upgrade head
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  INFO  [alembic.migration] Running upgrade  -> start_neutron_vpnaas, start 
neutron-vpnaas chain
  INFO  [alembic.migration] Running upgrade start_neutron_vpnaas -> 
3ea02b2a773e, add_index_tenant_id
  INFO  [alembic.migration] Running upgrade 3ea02b2a773e -> kilo, kilo
  INFO  [alembic.migration] Running upgrade kilo -> 5689aa52, fix 
identifier map fk
  Traceback (most recent call last):
File "/usr/bin/neutron-db-manage", line 10, in 
  sys.exit(main())
File "/opt/stack/neutron/neutron/db/migration/cli.py", line 238, in main
  CONF.command.func(config, CONF.command.name)
File "/opt/stack/neutron/neutron/db/migration/cli.py", line 106, in 
do_upgrade
  do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
File "/opt/stack/neutron/neutron/db/migration/cli.py", line 72, in 
do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/alembic/command.py", line 165, in 
upgrade
  script.run_env()
File "/usr/lib/python2.7/site-packages/alembic/script.py", line 390, in 
run_env
  util.load_python_file(self.dir, 'env.py')
File "/usr/lib/python2.7/site-packages/alembic/util.py", line 243, in 
load_python_file
  module = load_module_py(module_id, path)
File "/usr/lib/python2.7/site-packages/alembic/compat.py", line 79, in 
load_module_py
  mod = imp.load_source(module_id, path, fp)
File 
"/opt/stack/neutron-vpnaas/neutron_vpnaas/db/migration/alembic_migrations/env.py",
 line 86, in <module>
  run_migrations_online()
File 
"/opt/stack/neutron-vpnaas/neutron_vpnaas/db/migration/alembic_migrations/env.py",
 line 77, in run_migrations_online
  context.run_migrations()
File "", line 7, in run_migrations
File "/usr/lib/python2.7/site-packages/alembic/environment.py", line 738, 
in run_migrations
  self.get_context().run_migrations(**kw)
File "/usr/lib/python2.7/site-packages/alembic/migration.py", line 309, in 
run_migrations
  step.migration_fn(**kw)
File 
"/opt/stack/neutron-vpnaas/neutron_vpnaas/db/migration/alembic_migrations/versions/5689aa52_fix_identifier_map_fk.py",
 line 48, in upgrade
  existing_nullable=True)
File "", line 7, in alter_column
File "", line 1, in 
File "/usr/lib/python2.7/site-packages/alembic/util.py", line 388, in go
  return fn(*arg, **kw)
File "/usr/lib/python2.7/site-packages/alembic/operations.py", line 478, in 
alter_column
  existing_autoincrement=existing_autoincrement
File "/usr/lib/python2.7/site-packages/alembic/ddl/mysql.py", line 65, in 
alter_column
  else existing_autoincrement
File "/usr/lib/python2.7/site-packages/alembic/ddl/impl.py", line 122, in 
_exec
  return conn.execute(construct, *multiparams, **params)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
841, in execute
  return meth(self, multiparams, params)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 69, 
in _execute_on_connection
  return connection._execute_ddl(self, multiparams, params)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
895, in _execute_ddl
  compiled
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
1070, in _execute_context
  context)
File 
"/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/compat/handle_error.py", 
line 155, in _handle_dbapi_exception
  e, statement, parameters, cursor, context)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
1267, in _handle_dbapi_exception
  util.raise_from_cause(newraise, exc_info)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 
199, in raise_from_cause
  reraise(type(exception), exception, tb=exc_tb)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 
1063, in _execute_context
  context)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", 
line 442, in do_execute
  cursor.execute(statement, 
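
  A hedged sketch of the usual fix for this MySQL error (the column type
  and referent table are assumptions inferred from the traceback): drop
  the foreign key before altering the column, then re-create it:

  from alembic import op
  import sqlalchemy as sa

  def upgrade():
      # MySQL refuses to ALTER a column that is still referenced by a
      # foreign key, so drop the constraint first.
      op.drop_constraint('cisco_csr_identifier_map_ibfk_1',
                         'cisco_csr_identifier_map', type_='foreignkey')
      op.alter_column('cisco_csr_identifier_map', 'ipsec_site_conn_id',
                      existing_type=sa.String(64), existing_nullable=True)
      # Re-create the constraint afterwards (referent names are assumed).
      op.create_foreign_key('cisco_csr_identifier_map_ibfk_1',
                            'cisco_csr_identifier_map',
                            'ipsec_site_connections',
                            ['ipsec_site_conn_id'], ['id'],
                            ondelete='CASCADE')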

[Yahoo-eng-team] [Bug 1492254] Re: neutron should not try to bind port on compute with hypervisor_type ironic

2015-09-04 Thread Eugene Nikanorov
** No longer affects: mos

** No longer affects: mos/7.0.x

** No longer affects: mos/8.0.x

** Description changed:

  Neutron tries to bind port on compute where instance is launched.  It
- doesn't make sense when hypervisor_type is ironic, since VM  not lives
- on hypervisor in this case.  Furthermore it lead to failed provisioning
- of baremetal node, when neutron is not configured on ironic compute
- node.
+ doesn't make sense when hypervisor_type is ironic, since VM  does not
+ live on hypervisor in this case.  Furthermore it leads to failed
+ provisioning of baremetal node, when neutron is not configured on ironic
+ compute node.
  
  Setup:
  node-1: controller
  node-2: ironic-compute without neutron
  
  neutron-server.log: http://paste.openstack.org/show/445388/

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492254

Title:
  neutron should not try to bind port on compute with hypervisor_type
  ironic

Status in neutron:
  New

Bug description:
  Neutron tries to bind the port on the compute node where the instance is
  launched. This doesn't make sense when hypervisor_type is ironic, since
  the VM does not live on a hypervisor in this case. Furthermore, it leads
  to failed provisioning of the baremetal node when neutron is not
  configured on the ironic compute node.

  Setup:
  node-1: controller
  node-2: ironic-compute without neutron

  neutron-server.log: http://paste.openstack.org/show/445388/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491949] Re: gate-tempest-dsvm-large-ops fails to deallocate instance network due to rpc timeout

2015-09-04 Thread Matt Riedemann
This devstack change to turn on multihost=true is what triggered the
bug:

https://review.openstack.org/#/c/218860/

The revert is here:

https://review.openstack.org/#/c/220525/

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: devstack
   Status: New => In Progress

** Changed in: devstack
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491949

Title:
  gate-tempest-dsvm-large-ops fails to deallocate instance network due
  to rpc timeout

Status in devstack:
  In Progress
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  http://logs.openstack.org/96/219696/4/check/gate-tempest-dsvm-large-
  ops/158f061/logs/screen-n-cpu-1.txt.gz?level=TRACE

  2015-09-03 15:11:10.090 ERROR nova.compute.manager 
[req-ae96c425-a199-472f-a0db-e1b48147bb4c 
tempest-TestLargeOpsScenario-1690771764 
tempest-TestLargeOpsScenario-1474206998] [instance: 
195361d7-95c3-4740-825b-1acab707969e] Failed to deallocate network for instance.
  2015-09-03 15:11:11.051 ERROR nova.compute.manager 
[req-ae96c425-a199-472f-a0db-e1b48147bb4c 
tempest-TestLargeOpsScenario-1690771764 
tempest-TestLargeOpsScenario-1474206998] [instance: 
195361d7-95c3-4740-825b-1acab707969e] Setting instance vm_state to ERROR
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] Traceback (most recent call last):
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2396, in 
do_terminate_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._delete_instance(context, 
instance, bdms, quotas)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/hooks.py", line 149, in inner
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] rv = f(*args, **kwargs)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2375, in _delete_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] quotas.rollback()
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] six.reraise(self.type_, self.value, 
self.tb)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2338, in _delete_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._shutdown_instance(context, 
instance, bdms)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2265, in _shutdown_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._try_deallocate_network(context, 
instance, requested_networks)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2194, in 
_try_deallocate_network
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] 
self._set_instance_obj_error_state(context, instance)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] six.reraise(self.type_, self.value, 
self.tb)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2189, in 
_try_deallocate_network
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._deallocate_network(context, 
instance, requested_networks)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 1812, in _deallocate_network
  

[Yahoo-eng-team] [Bug 1492264] [NEW] Updating the security group rules does not reflected in the applicable running instances

2015-09-04 Thread Murugan234
Public bug reported:

Hi,

OpenStack Version : Kilo

Problem :   


An instance has been created with the security group Sample_Group and
is running as per the rules in that group. However, modifying or
updating the rules in the group is not reflected in the running
instance.

Query : 
==

Is it possible to update/modify the security rules for a running
instance without adding a new group to that instance?
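
For reference, rules can be added to an existing group through the API
like this (a minimal python-novaclient sketch; the credentials, endpoint,
and rule values are placeholders), and the change is expected to apply to
instances already using the group:

from novaclient import client

# Placeholders: use real credentials and your keystone endpoint.
nova = client.Client('2', 'admin', 'password', 'admin',
                     'http://controller:5000/v2.0')

group = nova.security_groups.find(name='Sample_Group')
# Add an HTTP rule to the existing group; no new group is attached.
nova.security_group_rules.create(parent_group_id=group.id,
                                 ip_protocol='tcp',
                                 from_port=80, to_port=80,
                                 cidr='203.0.113.0/24')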


Step/Terminal Output :


[root@centos7-openstack keystone]# nova secgroup-list-rules Sample_Group
+-------------+-----------+---------+----------------+--------------+
| IP Protocol | From Port | To Port | IP Range       | Source Group |
+-------------+-----------+---------+----------------+--------------+
| tcp         | 22        | 22      | 203.0.113.0/24 |              |
| icmp        | -1        | -1      | 203.0.113.0/24 |              |
+-------------+-----------+---------+----------------+--------------+


[root@centos7-openstack keystone]# nova boot --flavor m1.tiny --image 
cirros-0.3.4-x86_64 --nic net-id=d0902d54-e00d-4c54-a4a0-9a63c8102039 
--security-group Sample_Group --key-name demo-key demo-instance3
+--------------------------------------+------------------------------------------------------------+
| Property                             | Value                                                      |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                       |
| OS-EXT-SRV-ATTR:host                 | -                                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                          |
| OS-EXT-SRV-ATTR:instance_name        | instance-000a                                              |
| OS-EXT-STS:power_state               | 0                                                          |
| OS-EXT-STS:task_state                | scheduling                                                 |
| OS-EXT-STS:vm_state                  | building                                                   |
| OS-SRV-USG:launched_at               | -                                                          |
| OS-SRV-USG:terminated_at             | -                                                          |
| accessIPv4                           |                                                            |
| accessIPv6                           |                                                            |
| adminPass                            | fmHZXR638udt                                               |
| config_drive                         |                                                            |
| created                              | 2015-09-04T12:53:12Z                                       |
| flavor                               | m1.tiny (1)                                                |
| hostId                               |                                                            |
| id                                   | 92623f86-600c-4a3e-bdcb-b308bd1747de                       |
| image                                | cirros-0.3.4-x86_64 (44fc5cb7-62ea-4ced-95fe-cabaedcf583d) |
| key_name                             | demo-key                                                   |
| metadata                             | {}                                                         |
| name                                 | demo-instance3                                             |
| os-extended-volumes:volumes_attached | []                                                         |
| progress                             | 0                                                          |
| security_groups                      | Sample_Group                                               |
| status                               | BUILD                                                      |
| tenant_id                            | e91aeb7cdcf1410e9a70be9a4003c5d9                           |
| updated                              | 2015-09-04T12:53:12Z                                       |
| user_id                              | 6ea371c469ee41b7adcff4b7c5a9c211                           |
+--------------------------------------+------------------------------------------------------------+


[root@centos7-openstack keystone]# nova list
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
| 

[Yahoo-eng-team] [Bug 1492283] [NEW] Add ability to use custom config in DHCP-agent

2015-09-04 Thread Sergey Belous
Public bug reported:

Currently, dhcp-agent is hardcoded to use global oslo.config's CONF to
get and register options. Adding an ability to pass the config as an
argument will make dhcp-agent more flexible and reduce code coupling.
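
A minimal sketch of the proposed shape (the names here are illustrative
assumptions, not the actual Neutron code): option registration would
accept a ConfigOpts instance and fall back to the global one only by
default:

from oslo_config import cfg

def setup_conf(conf=None):
    # Fall back to the global CONF only when the caller does not supply
    # its own ConfigOpts (e.g. unit tests or an embedding service).
    conf = conf or cfg.CONF
    conf.register_opts([
        cfg.IntOpt('resync_interval', default=5,
                   help='Seconds between state resync attempts.'),
    ])
    return conf

# A test can then use an isolated config object instead of the global one:
isolated_conf = setup_conf(cfg.ConfigOpts())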

** Affects: neutron
 Importance: Undecided
 Assignee: Sergey Belous (sbelous)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sergey Belous (sbelous)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492283

Title:
  Add ability to use custom config in DHCP-agent

Status in neutron:
  New

Bug description:
  Currently, dhcp-agent is hardcoded to use global oslo.config's CONF to
  get and register options. Adding an ability to pass the config as an
  argument will make dhcp-agent more flexible and reduce code coupling.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492283/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416813] Re: default security group table's name is in singular format

2015-09-04 Thread Ann Kamyshnikova
** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1416813

Title:
  default security group table's name is in singular format

Status in neutron:
  Opinion

Bug description:
  In general, table names are in plural form, but the default security
  group table's name is singular.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1416813/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491131] Re: Ipset race condition - Kilo-2015.1.0

2015-09-04 Thread Kyle Mestery
Changed description and added Kilo series.

** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

** Summary changed:

- Ipset race condition - Kilo-2015.1.0
+ Ipset race condition

** Changed in: neutron/kilo
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491131

Title:
  Ipset race condition

Status in neutron:
  New
Status in neutron kilo series:
  New

Bug description:
  Hello,

  We have been using ipsets in neutron since Juno.  We upgraded our
  install to Kilo a month or so ago and have experienced 3 issues with
  ipsets.

  The issues are as follows:
  1.) Iptables attempts to apply rules for an ipset that was not added
  2.) Iptables attempts to apply rules for an ipset that was removed, but is 
still referenced in the iptables config
  3.) ipset churns trying to remove an ipset that has already been removed.

  For issues one and two I am unable to get the logs because neutron was
  dumping the full iptables-restore entries to the log once every second
  for a few hours, which eventually filled up the disk, and we removed
  the file to get things working again.

  For issue 3.) I have the start of the logs here:
  2015-08-31 12:17:00.100 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
29355e52-bae1-44b2-ace6-5bc7ce497d32 not present in bridge br-int
  2015-08-31 12:17:00.101 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:00.101 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'2aa0f79d-4983-4c7a-b489-e0612c482e36']
  2015-08-31 12:17:00.861 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
2aa0f79d-4983-4c7a-b489-e0612c482e36 not present in bridge br-int
  2015-08-31 12:17:00.862 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:00.862 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'c616328a-e44c-4cf8-bc8e-83058c5635dd']
  2015-08-31 12:17:01.499 4581 INFO neutron.agent.securitygroups_rpc 
[req-b5b95389-52b6-4051-ab35-ae383df56a0b ] Security group member updated 
[u'b05f4fa6-f1ec-41c0-8ba6-80b859dc23b0']
  2015-08-31 12:17:01.500 6840 INFO neutron.agent.securitygroups_rpc 
[req-b5b95389-52b6-4051-ab35-ae383df56a0b ] Security group member updated 
[u'b05f4fa6-f1ec-41c0-8ba6-80b859dc23b0']
  2015-08-31 12:17:01.608 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
2aa0f79d-4983-4c7a-b489-e0612c482e36 not present in bridge br-int
  2015-08-31 12:17:01.609 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:01.609 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'c616328a-e44c-4cf8-bc8e-83058c5635dd']
  2015-08-31 12:17:02.358 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
c616328a-e44c-4cf8-bc8e-83058c5635dd not present in bridge br-int
  2015-08-31 12:17:02.359 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:02.359 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'fddff586-9903-47ad-92e1-b334e02e9d1c']
  2015-08-31 12:17:03.108 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
c616328a-e44c-4cf8-bc8e-83058c5635dd not present in bridge br-int
  2015-08-31 12:17:03.109 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:03.109 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'fddff586-9903-47ad-92e1-b334e02e9d1c']
  2015-08-31 12:17:03.855 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
fddff586-9903-47ad-92e1-b334e02e9d1c not present in bridge br-int
  2015-08-31 12:17:03.855 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:03.856 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'3f706749-f8bb-41ab-aa4c-a0925dc67bd4']
  

[Yahoo-eng-team] [Bug 1492342] [NEW] ComputeNode pci_device_pools field changes from empty PciDevicePoolList to None after save()

2015-09-04 Thread Taylor Peoples
Public bug reported:

If a ComputeNode object has its pci_device_pools field set to an empty
PciDevicePoolList object and then that node object is saved via the
save() method, the pci_device_pools field of the same object is changed
to None.  This is due to the following flow:

nova.objects.compute_node.ComputeNode.save()
nova.objects.compute_node.ComputeNode._from_db_object()
nova.objects.pci_device_pool.from_pci_stats()

from_pci_stats() returns None instead of an empty PciDevicePoolList as I
would have expected.  This can cause comparisons of a node object to
fail after doing a save() because this field changes.  See the script
below for an example.

"""
#!/usr/bin/python
import copy
import sys

from nova import config
from nova import context as ctxt
from nova.objects import base
from nova.objects import compute_node
from nova.objects import hv_spec
from oslo_config import cfg
from oslo_log import log as logging
from nova.objects import pci_device_pool

CONF = cfg.CONF
config.parse_args(sys.argv[0:1])

logging.setup(cfg.CONF, 'nova')
LOG = logging.getLogger(__name__)

context = ctxt.get_admin_context()
node = compute_node.ComputeNodeList.get_all(context)[0]
node.pci_device_pools = pci_device_pool.PciDevicePoolList([])
node_before_save = copy.deepcopy(node)

LOG.info('node.pci_device_pools before save: %s' % node.pci_device_pools)
node.save()
LOG.info('node.pci_device_pools after save: %s' % node.pci_device_pools)

LOG.info('base.obj_equal_prims(node_before_save, node, ["updated_at"]) = %s' %
 base.obj_equal_prims(node_before_save, node, ['updated_at']))
"""

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492342

Title:
  ComputeNode pci_device_pools field changes from empty
  PciDevicePoolList to None after save()

Status in OpenStack Compute (nova):
  New

Bug description:
  If a ComputeNode object has its pci_device_pools field set to an empty
  PciDevicePoolList object and then that node object is saved via the
  save() method, the pci_device_pools field of the same object is
  changed to None.  This is due to the following flow:

  nova.objects.compute_node.ComputeNode.save()
  nova.objects.compute_node.ComputeNode._from_db_object()
  nova.objects.pci_device_pool.from_pci_stats()

  from_pci_stats() returns None instead of an empty PciDevicePoolList as
  I would have expected.  This can cause comparisons of a node object to
  fail after doing a save() because this field changes.  See the script
  below for an example.

  """
  #!/usr/bin/python
  import copy
  import sys

  from nova import config
  from nova import context as ctxt
  from nova.objects import base
  from nova.objects import compute_node
  from nova.objects import hv_spec
  from oslo_config import cfg
  from oslo_log import log as logging
  from nova.objects import pci_device_pool

  CONF = cfg.CONF
  config.parse_args(sys.argv[0:1])

  logging.setup(cfg.CONF, 'nova')
  LOG = logging.getLogger(__name__)

  context = ctxt.get_admin_context()
  node = compute_node.ComputeNodeList.get_all(context)[0]
  node.pci_device_pools = pci_device_pool.PciDevicePoolList([])
  node_before_save = copy.deepcopy(node)

  LOG.info('node.pci_device_pools before save: %s' % node.pci_device_pools)
  node.save()
  LOG.info('node.pci_device_pools after save: %s' % node.pci_device_pools)

  LOG.info('base.obj_equal_prims(node_before_save, node, ["updated_at"]) = %s' %
   base.obj_equal_prims(node_before_save, node, ['updated_at']))
  """

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1492342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492270] Re: SortedDict class is deprecated since Django 1.7 and will be removed in 1.9

2015-09-04 Thread Timur Sufiev
Okay, there is an opinion that a blueprint is too loud a word for such a
change.

** Changed in: horizon
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1492270

Title:
  SortedDict class is deprecated since Django 1.7 and will be removed in
  1.9

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  Thus django.utils.datastructures.SortedDict should be replaced with
  collections.OrderedDict.
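
  The mechanical replacement looks like this (OrderedDict has been in the
  standard library since Python 2.7):

  # Before (deprecated in Django 1.7, removed in 1.9):
  #   from django.utils.datastructures import SortedDict
  #   ordered = SortedDict([('b', 2), ('a', 1)])

  # After:
  from collections import OrderedDict

  ordered = OrderedDict([('b', 2), ('a', 1)])
  assert list(ordered) == ['b', 'a']  # insertion order is preserved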

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1492270/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491949] Re: gate-tempest-dsvm-large-ops fails to deallocate instance network due to rpc timeout

2015-09-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/220525
Committed: 
https://git.openstack.org/cgit/openstack-dev/devstack/commit/?id=975243189216561f66ca91520495e0c6e2f747e2
Submitter: Jenkins
Branch: master

commit 975243189216561f66ca91520495e0c6e2f747e2
Author: Matt Riedemann 
Date:   Fri Sep 4 14:15:27 2015 +

Revert "turn multi host true for nova network by default"

This reverts commit 2e1a91c50b73ca7f46871d3a906ade93bbcac6a7

It looks like this introduced race bug 1491949 in the
gate-tempest-dsvm-large-ops job causing rpc timeouts when
deallocating network information for an instance,
specifically around the dnsmasq callback to release the
fixed IP that the instance was using which triggers the
disassociation between the fixed IP and the instance in the
nova database.

Change-Id: I163cdeea75e92485f241647c69aea0d7456c3258
Closes-Bug: #1491949


** Changed in: devstack
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491949

Title:
  gate-tempest-dsvm-large-ops fails to deallocate instance network due
  to rpc timeout

Status in devstack:
  Fix Released
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  http://logs.openstack.org/96/219696/4/check/gate-tempest-dsvm-large-
  ops/158f061/logs/screen-n-cpu-1.txt.gz?level=TRACE

  2015-09-03 15:11:10.090 ERROR nova.compute.manager 
[req-ae96c425-a199-472f-a0db-e1b48147bb4c 
tempest-TestLargeOpsScenario-1690771764 
tempest-TestLargeOpsScenario-1474206998] [instance: 
195361d7-95c3-4740-825b-1acab707969e] Failed to deallocate network for instance.
  2015-09-03 15:11:11.051 ERROR nova.compute.manager 
[req-ae96c425-a199-472f-a0db-e1b48147bb4c 
tempest-TestLargeOpsScenario-1690771764 
tempest-TestLargeOpsScenario-1474206998] [instance: 
195361d7-95c3-4740-825b-1acab707969e] Setting instance vm_state to ERROR
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] Traceback (most recent call last):
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2396, in 
do_terminate_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._delete_instance(context, 
instance, bdms, quotas)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/hooks.py", line 149, in inner
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] rv = f(*args, **kwargs)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2375, in _delete_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] quotas.rollback()
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] six.reraise(self.type_, self.value, 
self.tb)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2338, in _delete_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._shutdown_instance(context, 
instance, bdms)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2265, in _shutdown_instance
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] self._try_deallocate_network(context, 
instance, requested_networks)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2194, in 
_try_deallocate_network
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] 
self._set_instance_obj_error_state(context, instance)
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
  2015-09-03 15:11:11.051 22635 ERROR nova.compute.manager [instance: 
195361d7-95c3-4740-825b-1acab707969e] 

[Yahoo-eng-team] [Bug 1492345] [NEW] Unable to perform nova-api operations on instance

2015-09-04 Thread Ioana-Madalina Patrichi
Public bug reported:

OpenStack version: Kilo
Nova version:  1:2015.1.0-0ubuntu1.1

Every time I try to perform an operation on an instance, the compute
node reports the following error:

2015-09-04 15:35:48.529 42883 INFO nova.compute.manager 
[req-b750c91a-ea2a-425c-832d-906e3c452904 c39a72988ef2478a930e627caa7f706a 
2ff06b822bab4d59a6f0bc81be34980f - - -] [instance: 
6a97676c-f735-4a93-b787-6d9b4c367836] Rebooting instance
2015-09-04 15:35:48.847 42883 INFO nova.scheduler.client.report 
[req-b750c91a-ea2a-425c-832d-906e3c452904 c39a72988ef2478a930e627caa7f706a 
2ff06b822bab4d59a6f0bc81be34980f - - -] Compute_service record updated for 
('ncn11', 'ncn11.hpscto.local')
2015-09-04 15:35:48.848 42883 ERROR oslo_messaging.rpc.dispatcher 
[req-b750c91a-ea2a-425c-832d-906e3c452904 c39a72988ef2478a930e627caa7f706a 
2ff06b822bab4d59a6f0bc81be34980f - - -] Exception during message handling: 
Cannot call obj_load_attr on orphaned Instance object
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 142, 
in _dispatch_and_reply
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 186, 
in _dispatch
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 130, 
in _do_dispatch
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6695, in 
reboot_instance
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
reboot_type)
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher payload)
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 327, in 
decorated_function
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
LOG.warning(msg, e, instance_uuid=instance_uuid)
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 298, in 
decorated_function
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 377, in 
decorated_function
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 355, in 
decorated_function
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 85, in __exit__
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 343, in 
decorated_function
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2015-09-04 15:35:48.848 42883 TRACE oslo_messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 3106, in 
reboot_instance
2015-09-04 

[Yahoo-eng-team] [Bug 1492426] [NEW] magic search text cancel

2015-09-04 Thread Kevin Fox
Public bug reported:

In magic search, if you add a facet, then some text, then clear the
text, the facets all get cleared too, and no searchUpdated event happens
on the cleared facets.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1492426

Title:
  magic search text cancel

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In magic search, if you add a facet, then some text, then clear the
  text, the facets all get cleared too, and no searchUpdated event
  happens on the cleared facets.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1492426/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492242] Re: novaclient is unable to update quotas via Nova V2.1

2015-09-04 Thread Doug Hellmann
** Changed in: python-novaclient
   Status: Fix Committed => Fix Released

** Changed in: python-novaclient
Milestone: None => 2.28.1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492242

Title:
  novaclient is unable to update quotas via Nova V2.1

Status in OpenStack Compute (nova):
  New
Status in python-novaclient:
  Fix Released

Bug description:
  python-novaclient always transmits the tenant-id in the request body
  for quota updates, but Nova v2.1 (Nova v2 is OK with it) doesn't
  accept this parameter and fails.

  ERROR (BadRequest): Invalid input for field/attribute quota_set.
  Value: {u'tenant_id': u'582df899eabc47018c96713c2f7196ba',
  u'security_groups': 15}. Additional properties are not allowed
  (u'tenant_id' was unexpected) (HTTP 400) (Request-ID: req-8bbb5dda-
  c6f2-4126-b88e-c3949a85f8ff)

  Found in rally gates: http://logs.openstack.org/29/184629/19/check
  /gate-rally-dsvm-rally-nova/02014e2/rally-
  plot/results.html.gz#/Quotas.nova_update/failures

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1492242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1452298] Re: Fails to filter domains by id

2015-09-04 Thread Doug Hellmann
** Changed in: python-keystoneclient
   Status: Fix Committed => Fix Released

** Changed in: python-keystoneclient
Milestone: None => 1.7.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1452298

Title:
  Fails to filter domains by id

Status in Keystone:
  Invalid
Status in python-keystoneclient:
  Fix Released

Bug description:
  V3 client fails to filter domains by id. Following code should list
  only 'default' domain, but list of all domains is returned instead:

  >>> import keystoneclient.v3.client as ksclient_v3
  >>> client = ksclient_v3.Client(endpoint='http://192.0.2.5:35357/v3', 
token='153c6ee5a6486e7db131ada9a464ab0f12f3f4cb')
  >>> default_domain = client.domains.list(id='default')[0]
  >>> default_domain
  <Domain links={u'self': u'http://192.0.2.5:35357/v3/domains/29f4f3f567f943eb9769329352753b89'}, 
name=heat_stack>
  >>> client.domains.list(id='default')
  [<Domain links={u'self': u'http://192.0.2.5:35357/v3/domains/29f4f3f567f943eb9769329352753b89'}, 
name=heat_stack>, <Domain links={u'self': u'http://192.0.2.5:35357/v3/domains/default'}, name=Default>]

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1452298/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492422] [NEW] hz-magic-search-bar doesn't use a class

2015-09-04 Thread Kevin Fox
Public bug reported:

The CSS for the magic search bar doesn't use a class but hard-codes the
element name. This means it cannot be reused for app-catalog-ui. We
will have to fork the whole file into app-catalog-ui, or just change
the CSS file in Horizon to use a class so we can share it.

** Affects: horizon
 Importance: Undecided
 Assignee: Kevin Fox (kevpn)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1492422

Title:
  hz-magic-search-bar doesn't use a class

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The CSS for the magic search bar doesn't use a class but hard-codes
  the element name. This means it cannot be reused for app-catalog-ui.
  We will have to fork the whole file into app-catalog-ui, or just
  change the CSS file in Horizon to use a class so we can share it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1492422/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469867] Re: Stop using deprecated oslo_utils.timeutils.strtime

2015-09-04 Thread Doug Hellmann
** Changed in: python-keystoneclient
   Status: Fix Committed => Fix Released

** Changed in: python-keystoneclient
Milestone: None => 1.7.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1469867

Title:
  Stop using deprecated oslo_utils.timeutils.strtime

Status in Keystone:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in python-keystoneclient:
  Fix Released

Bug description:
  
  Keystone unit tests are failing because they're still using the deprecated 
oslo_utils.timeutils.strtime function. We need to stop using the function.

  DeprecationWarning: Using function/method
  'oslo_utils.timeutils.strtime()' is deprecated in version '1.6' and
  will be removed in a future version: use either
  datetime.datetime.isoformat() or datetime.datetime.strftime() instead
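
  The replacement is mechanical; for example:

  import datetime

  now = datetime.datetime.utcnow()

  # Instead of the deprecated oslo_utils.timeutils.strtime(now):
  print(now.isoformat())                    # e.g. 2015-09-04T12:53:12.000123
  print(now.strftime('%Y-%m-%dT%H:%M:%S'))  # or an explicit format string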

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1469867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461251] Re: Stop using deprecated oslo_utils.timeutils.isotime

2015-09-04 Thread Doug Hellmann
** Changed in: python-keystoneclient
   Status: Fix Committed => Fix Released

** Changed in: python-keystoneclient
Milestone: None => 1.7.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1461251

Title:
  Stop using deprecated oslo_utils.timeutils.isotime

Status in Keystone:
  Fix Released
Status in oslo.utils:
  New
Status in python-keystoneclient:
  Fix Released

Bug description:
  oslo_utils.timeutils.isotime() is deprecated as of 1.6 so we need to
  stop using it.

  This breaks unit tests in keystone since we've got a check for calling
  deprecated functions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1461251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492450] [NEW] Error when attaching volume to instance

2015-09-04 Thread Richard Hagarty
Public bug reported:

After Nova patch https://review.openstack.org/#/c/219696 landed, Horizon
users can no longer attach a volume to an instance.

The Nova API requires a non-blank "device" value, or null. Horizon is
passing a blank string, which results in the following exception:

Recoverable error: Invalid input for field/attribute device. Value: .
u'' does not match '(^/dev/x{0,1}[a-z]{0,1}d{0,1})([a-z]+)[0-9]*$' (HTTP
400)
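
For reference, the corresponding python-novaclient call should pass
device=None (serialized as JSON null) rather than an empty string; a
sketch, where `nova`, `server_id`, and `volume_id` are assumed to exist
and `device_field` is a hypothetical form value that is '' when blank:

device = device_field or None  # map '' to None, i.e. JSON null
nova.volumes.create_server_volume(server_id, volume_id, device=device)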

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1492450

Title:
  Error when attaching volume to instance

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After Nova patch https://review.openstack.org/#/c/219696 landed,
  Horizon users can no longer attach a volume to an instance.

  The Nova API requires a non-blank "device" value, or null. Horizon is
  passing a blank string, which results in the following exception:

  Recoverable error: Invalid input for field/attribute device. Value: .
  u'' does not match '(^/dev/x{0,1}[a-z]{0,1}d{0,1})([a-z]+)[0-9]*$'
  (HTTP 400)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1492450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492451] [NEW] installing devstack with rally plugin fails

2015-09-04 Thread Madhusudhan Kandadai
Public bug reported:

Installing devstack with the rally plugin fails.

How to reproduce?

1. git clone https://github.com/openstack-dev/devstack.git
2. Copy the local.conf into the devstack directory and add the line below to 
local.conf:
enable_service rally
3. ./stack.sh

  
2015-09-04 19:08:47.749 | 2015-09-04 12:08:47.748 INFO migrate.versioning.api 
[-] done
2015-09-04 19:08:47.896 | + create_nova_cache_dir
2015-09-04 19:08:47.896 | + sudo install -d -o devstack /var/cache/nova
2015-09-04 19:08:47.912 | + rm -f '/var/cache/nova/*'
2015-09-04 19:08:47.913 | + create_nova_keys_dir
2015-09-04 19:08:47.914 | + sudo install -d -o devstack /opt/stack/data/nova 
/opt/stack/data/nova/keys
2015-09-04 19:08:47.928 | + [[ '' == \L\V\M ]]
2015-09-04 19:08:47.929 | + is_service_enabled neutron
2015-09-04 19:08:47.936 | + return 0
2015-09-04 19:08:47.936 | + create_nova_conf_neutron
2015-09-04 19:08:47.936 | + iniset /etc/nova/nova.conf DEFAULT 
network_api_class nova.network.neutronv2.api.API
2015-09-04 19:08:47.959 | + '[' True == False ']'
2015-09-04 19:08:47.959 | + iniset /etc/nova/nova.conf neutron admin_username 
neutron
2015-09-04 19:08:47.980 | + iniset /etc/nova/nova.conf neutron admin_password 
password
2015-09-04 19:08:47.996 | + iniset /etc/nova/nova.conf neutron admin_auth_url 
http://192.168.29.130:35357/v2.0
2015-09-04 19:08:48.021 | + iniset /etc/nova/nova.conf neutron 
admin_tenant_name service
2015-09-04 19:08:48.070 | + iniset /etc/nova/nova.conf neutron auth_strategy 
keystone
2015-09-04 19:08:48.096 | + iniset /etc/nova/nova.conf neutron region_name 
RegionOne
2015-09-04 19:08:48.113 | + iniset /etc/nova/nova.conf neutron url 
http://192.168.29.130:9696
2015-09-04 19:08:48.139 | + [[ True == \T\r\u\e ]]
2015-09-04 19:08:48.140 | + 
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver
2015-09-04 19:08:48.140 | + iniset /etc/nova/nova.conf DEFAULT firewall_driver 
nova.virt.firewall.NoopFirewallDriver
2015-09-04 19:08:48.163 | + iniset /etc/nova/nova.conf DEFAULT 
security_group_api neutron
2015-09-04 19:08:48.183 | + neutron_plugin_create_nova_conf
2015-09-04 19:08:48.183 | + _neutron_ovs_base_configure_nova_vif_driver
2015-09-04 19:08:48.183 | + :
2015-09-04 19:08:48.183 | + '[' libvirt == xenserver ']'
2015-09-04 19:08:48.183 | + iniset /etc/nova/nova.conf DEFAULT 
linuxnet_interface_driver ''
2015-09-04 19:08:48.206 | + is_service_enabled q-meta
2015-09-04 19:08:48.214 | + return 0
2015-09-04 19:08:48.215 | + iniset /etc/nova/nova.conf neutron 
service_metadata_proxy True
2015-09-04 19:08:48.233 | + iniset /etc/nova/nova.conf DEFAULT 
vif_plugging_is_fatal True
2015-09-04 19:08:48.254 | + iniset /etc/nova/nova.conf DEFAULT 
vif_plugging_timeout 300
2015-09-04 19:08:48.273 | + init_nova_cells
2015-09-04 19:08:48.273 | + is_service_enabled n-cell
2015-09-04 19:08:48.281 | + return 1
2015-09-04 19:08:48.281 | + run_phase stack post-config
2015-09-04 19:08:48.281 | + local mode=stack
2015-09-04 19:08:48.281 | + local phase=post-config
2015-09-04 19:08:48.281 | + [[ -d /home/devstack/devstack/extras.d ]]
2015-09-04 19:08:48.281 | + for i in '$TOP_DIR/extras.d/*.sh'
2015-09-04 19:08:48.281 | + [[ -r /home/devstack/devstack/extras.d/50-ironic.sh 
]]
2015-09-04 19:08:48.281 | + source 
/home/devstack/devstack/extras.d/50-ironic.sh stack post-config
2015-09-04 19:08:48.281 | ++ is_service_enabled ir-api ir-cond
2015-09-04 19:08:48.287 | ++ return 1
2015-09-04 19:08:48.287 | + for i in '$TOP_DIR/extras.d/*.sh'
2015-09-04 19:08:48.287 | + [[ -r /home/devstack/devstack/extras.d/60-ceph.sh ]]
2015-09-04 19:08:48.288 | + source /home/devstack/devstack/extras.d/60-ceph.sh 
stack post-config
2015-09-04 19:08:48.288 | ++ is_service_enabled ceph
2015-09-04 19:08:48.297 | ++ return 1
2015-09-04 19:08:48.297 | + for i in '$TOP_DIR/extras.d/*.sh'
2015-09-04 19:08:48.297 | + [[ -r /home/devstack/devstack/extras.d/70-rally.sh 
]]
2015-09-04 19:08:48.297 | + source /home/devstack/devstack/extras.d/70-rally.sh 
stack post-config
2015-09-04 19:08:48.298 | ++ is_service_enabled rally
2015-09-04 19:08:48.305 | ++ return 0
2015-09-04 19:08:48.305 | ++ [[ stack == \s\o\u\r\c\e ]]
2015-09-04 19:08:48.305 | ++ [[ stack == \s\t\a\c\k ]]
2015-09-04 19:08:48.305 | ++ [[ post-config == \i\n\s\t\a\l\l ]]
2015-09-04 19:08:48.305 | ++ [[ stack == \s\t\a\c\k ]]
2015-09-04 19:08:48.305 | ++ [[ post-config == \p\o\s\t\-\c\o\n\f\i\g ]]
2015-09-04 19:08:48.305 | ++ echo_summary 'Configuring Rally'
2015-09-04 19:08:48.305 | ++ [[ -t 3 ]]
2015-09-04 19:08:48.305 | ++ [[ True != \T\r\u\e ]]
2015-09-04 19:08:48.306 | ++ echo -e Configuring Rally
2015-09-04 19:08:48.306 | ++ configure_rally
2015-09-04 19:08:48.306 | ++ [[ ! -d /etc/rally ]]
2015-09-04 19:08:48.306 | ++ sudo chown devstack /etc/rally
2015-09-04 19:08:48.318 | ++ cp /opt/stack/rally/etc/rally/rally.conf.sample 
/etc/rally/rally.conf
2015-09-04 19:08:48.321 | ++ iniset /etc/rally/rally.conf DEFAULT debug False
2015-09-04 19:08:48.348 | +++ database_connection_url rally

[Yahoo-eng-team] [Bug 1492456] [NEW] cProfile - fix Security Groups hotfunctions

2015-09-04 Thread Ramu Ramamurthy
Public bug reported:

I used cProfile to profile neutron-ovs-agent (from neutron kilo
2015.1.0) as VMs are provisioned (see code sample below to reproduce).

I find a couple of functions in the IptablesManager scaling poorly with the # 
of VMs (_modify_rules, and its callee _find_last_entry).
As the # of current VMs doubles, the time spent in these functions to provision 
10 new VMs also roughly doubles.
While we wait for the new iptables firewall driver:
https://blueprints.launchpad.net/neutron/+spec/new-iptables-driver
can we improve the performance of the current iptables firewall in those 2 
functions, which do a lot of string processing?

Current: #VMs: 20, # SG rules: 657,
provision 10 new VMs
(columns: ncalls  tottime  percall  cumtime  percall  filename:lineno(function))

    60    0.143    0.002    3.979    0.066  iptables_manager.py:511(_modify_rules)
 25989    2.752    0.000    3.332    0.000  iptables_manager.py:504(_find_last_entry)

Current: #VMs: 40, # SG rules: 1277,
provision 10 new VMs

    65    0.220    0.003    7.974    0.123  iptables_manager.py:511(_modify_rules)
 38891    5.782    0.000    6.986    0.000  iptables_manager.py:504(_find_last_entry)

Current: #VMs: 80, # SG rules: 2517,
provision 10 new VMs

    30    0.274    0.009   20.496    0.683  iptables_manager.py:511(_modify_rules)
 43862   15.920    0.000   19.292    0.000  iptables_manager.py:504(_find_last_entry)

Current: #VMs: 160, # SG rules: 4997,
provision 10 new VMs

    20    0.375    0.019   49.255    2.463  iptables_manager.py:511(_modify_rules)
 56478   39.275    0.001   47.629    0.001  iptables_manager.py:504(_find_last_entry)

To Reproduce:
---
Make the following change to neutron_ovs_agent.py to enable/disable cProfile 
for a given scenario. At the top of the module, add:

import cProfile
import os.path

pr_enabled = False
pr = None

In OVSNeutronAgent, add this method:

def toggle_cprofile(self):
    global pr, pr_enabled
    start = False
    fname = "vm.profile"
    try:
        # The existence of /tmp/cprof toggles profiling on; remove the
        # file to stop profiling and dump the stats.
        if os.path.isfile("/tmp/cprof"):
            start = True
    except IOError as e:
        LOG.warn("Error %s", e.strerror)

    if start and not pr_enabled:
        pr = cProfile.Profile()
        pr.enable()
        pr_enabled = True
        LOG.warn("enabled cprofile")
    if not start and pr_enabled:
        pr.disable()
        pr.create_stats()
        pr.dump_stats("/tmp/%s" % fname)
        pr_enabled = False
        LOG.warn("disabled cprofile")

In the polling loop, call:
    self.toggle_cprofile()
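
The dump can then be inspected with the standard pstats module, for
example:

import pstats

# Read the file written by toggle_cprofile() above:
stats = pstats.Stats('/tmp/vm.profile')
stats.sort_stats('cumulative').print_stats(20)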

** Affects: neutron
 Importance: Undecided
 Status: New


[Yahoo-eng-team] [Bug 1492242] Re: novaclient is unable to update quotas via Nova V2.1

2015-09-04 Thread Davanum Srinivas (DIMS)
Fixed in novaclient.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492242

Title:
  novaclient is unable to update quotas via Nova V2.1

Status in OpenStack Compute (nova):
  Invalid
Status in python-novaclient:
  Fix Released

Bug description:
  python-novaclient always transmits the tenant-id in the request body
  for quota updates, but Nova v2.1 (Nova v2 is OK with it) doesn't
  accept this parameter and fails.

  ERROR (BadRequest): Invalid input for field/attribute quota_set.
  Value: {u'tenant_id': u'582df899eabc47018c96713c2f7196ba',
  u'security_groups': 15}. Additional properties are not allowed
  (u'tenant_id' was unexpected) (HTTP 400) (Request-ID: req-8bbb5dda-
  c6f2-4126-b88e-c3949a85f8ff)

  Found in rally gates: http://logs.openstack.org/29/184629/19/check
  /gate-rally-dsvm-rally-nova/02014e2/rally-
  plot/results.html.gz#/Quotas.nova_update/failures

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1492242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492505] [NEW] py34 intermittent failure

2015-09-04 Thread Armando Migliaccio
Public bug reported:

An instance here:

http://logs.openstack.org/56/220656/1/gate/gate-neutron-
python34/e2c4460/testr_results.html.gz

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiUGFyc2VyIEVycm9yOiB7e3tCYWQgY2hlY2tzdW0gLSBjYWxjdWxhdGVkXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDE0MTM2MjA4MTV9

message:"Parser Error: {{{Bad checksum - calculated"

This has been observed in other py34 jobs

** Affects: neutron
 Importance: High
 Assignee: Cedric Brandily (cbrandily)
 Status: Confirmed

** Affects: oslo.messaging
 Importance: Undecided
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Cedric Brandily (cbrandily)

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
Milestone: None => liberty-rc1

** Changed in: neutron
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492505

Title:
  py34 intermittent failure

Status in neutron:
  Confirmed
Status in oslo.messaging:
  New

Bug description:
  An instance here:

  http://logs.openstack.org/56/220656/1/gate/gate-neutron-
  python34/e2c4460/testr_results.html.gz

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiUGFyc2VyIEVycm9yOiB7e3tCYWQgY2hlY2tzdW0gLSBjYWxjdWxhdGVkXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDE0MTM2MjA4MTV9

  message:"Parser Error: {{{Bad checksum - calculated"

  This has been observed in other py34 jobs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492505/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492505] Re: py34 intermittent failure

2015-09-04 Thread Armando Migliaccio
@Cedric: can you please triage this? Many thanks.

** Description changed:

  An instance here:
  
  http://logs.openstack.org/56/220656/1/gate/gate-neutron-
  python34/e2c4460/testr_results.html.gz
+ 
+ 
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiUGFyc2VyIEVycm9yOiB7e3tCYWQgY2hlY2tzdW0gLSBjYWxjdWxhdGVkXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDE0MTM2MjA4MTV9
+ 
+ message:"Parser Error: {{{Bad checksum - calculated"
+ 
+ This has been observed in other py34 jobs

** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1492505

Title:
  py34 intermittent failure

Status in neutron:
  Confirmed
Status in oslo.messaging:
  New

Bug description:
  An instance here:

  http://logs.openstack.org/56/220656/1/gate/gate-neutron-
  python34/e2c4460/testr_results.html.gz

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiUGFyc2VyIEVycm9yOiB7e3tCYWQgY2hlY2tzdW0gLSBjYWxjdWxhdGVkXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0NDE0MTM2MjA4MTV9

  message:"Parser Error: {{{Bad checksum - calculated"

  This has been observed in other py34 jobs

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1492505/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1492120] [NEW] spelling mistake in comment in nova/network/manager.py

2015-09-04 Thread partys
Public bug reported:

Hi! I'm spell checking nova.
In nova/network/manager.py

The detail is as follows:

  # NOTE(vish): I strongy suspect the v6 subnet is not used
  # anywhere, but support it just in case
  # add the v6 address to the v6 subnet

The correct version should be
  # NOTE(vish): I strongly suspect the v6 subnet is not used
  # anywhere, but support it just in case
  # add the v6 address to the v6 subnet

"strongy" is incorrect

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1492120

Title:
  spelling mistake in comment in nova/network/manager.py

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi! I'm spell checking nova.
  In nova/network/manager.py

  The detail is as follows:

# NOTE(vish): I strongy suspect the v6 subnet is not used
# anywhere, but support it just in case
# add the v6 address to the v6 subnet

  The correct version should be
# NOTE(vish): I strongly suspect the v6 subnet is not used
# anywhere, but support it just in case
# add the v6 address to the v6 subnet

  "strongy" is incorrect

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1492120/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482301] Re: 'X-Openstack-Request-ID' length limited only by header size

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

** Changed in: glance
Milestone: None => liberty-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1482301

Title:
  'X-Openstack-Request-ID' length limited only by header size

Status in Glance:
  Fix Released
Status in Glance juno series:
  New
Status in Glance kilo series:
  New
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Glance accepts the 'X-Openstack-Request-ID' header and includes the
  value in log files. The length of the request ID is limited only by
  the max_header_line parameter, which defaults to 16384. This opens up
  the possibility of flooding the logs.

  This is public as the vulnerability was already discussed today in the
  Glance weekly meeting.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1482301/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479257] Re: error message's format in image_member.py is not correct

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1479257

Title:
  error message's format in image_member.py is not correct

Status in Glance:
  Fix Released

Bug description:
  When users try to get the member list of a public image in API v2, it
  returns an error:

  glance --os-image-api-version 2 member-list --image-id 

  
  403 Forbidden: (u'Error fetching members of image %(image_id)s: 
%(inner_msg)s', {'image_id': u'', 'inner_msg': u'Public images do not have 
members.'}) (HTTP 403)

  The error information's format is not correct. It should be:

  403 Forbidden: Error fetching members of image : Public images do
  not have members. (HTTP 403)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1479257/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483316] Re: FEATURE_BLACKLIST parameter is unused

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1483316

Title:
  FEATURE_BLACKLIST parameter is unused

Status in Glance:
  Fix Released

Bug description:
  
  glance/common/utils.py has a FEATURE_BLACKLIST parameter, but this is no 
longer referenced in the code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1483316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478690] Re: Request ID has a double req- at the start

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1478690

Title:
  Request ID has a double req- at the start

Status in Glance:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Committed

Bug description:
  ➜  vagrant git:(master) http http://192.168.121.242:9393/v1/search 
X-Auth-Token:$token query:='{"match_all" : {}}'
  HTTP/1.1 200 OK
  Content-Length: 138
  Content-Type: application/json; charset=UTF-8
  Date: Mon, 27 Jul 2015 20:21:31 GMT
  X-Openstack-Request-Id: req-req-0314bf5b-9c04-4bed-bf86-d2e76d297a34
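
  A minimal sketch of the idempotent prefixing that avoids the doubled
  prefix (illustrative only, not the actual middleware code):

  def ensure_request_id(req_id):
      # Only add 'req-' when it is not already present.
      return req_id if req_id.startswith('req-') else 'req-' + req_id

  assert ensure_request_id('0314bf5b') == 'req-0314bf5b'
  assert ensure_request_id('req-0314bf5b') == 'req-0314bf5b'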

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1478690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469869] Re: Metadata definitions in etc/metadefs are not included in Python packages

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1469869

Title:
  Metadata definitions in etc/metadefs are not included in Python
  packages

Status in Glance:
  Fix Released

Bug description:
  The files in etc/metadefs in the Glance repository are not included in
  either the tarball or wheel when one runs

  python setup.py sdist bdist_wheel

  These should be included by default and installed so they can be used.
  Since wheels should not be allowed to write to paths outside of the
  directory the package is installed in,

  glance-manage db_load_metadefs

  Should also look in the installed directory path for etc/metadefs when
  loading them.

  This is a problem in every version of Glance that was meant to include
  those metadata definitions. Since this does not prevent functionality
  from working (since a user could download the files to /etc/metadefs
  and run the command), I do not think this has backport potential.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1469869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480196] Re: Request-id is not getting returned if glance throws 500 error

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1480196

Title:
  Request-id is not getting returned if glance throws 500 error

Status in Glance:
  Fix Released

Bug description:
  If glance throws an Internal Server Error (500) for some reason, the
  'request-id' is not returned in the response headers.

  The request-id is required to analyse logs effectively on failure, so it
  should be returned in the headers.

  For ex. -

  image-create api returns 500 error if property name exceeds 255 characters
  (fix for this issue is in progress : https://review.openstack.org/#/c/203948/)

  curl command:

  $ curl -g -i -X POST -H 'Accept-Encoding: gzip, deflate' -H 'x-image-
  meta-container_format: ami' -H 'x-image-meta-property-
  
:
  jskg' -H 'Accept: */*' -H 'X-Auth-Token:
  b94bd7b3a0fb4fada73fe170fe7d49cb' -H 'Connection: keep-alive' -H 'x
  -image-meta-is_public: None' -H 'User-Agent: python-glanceclient' -H
  'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format:
  ami' http://10.69.4.173:9292/v1/images

  HTTP/1.1 500 Internal Server Error
  Content-Type: text/plain
  Content-Length: 0
  Date: Fri, 31 Jul 2015 08:27:31 GMT
  Connection: close

  Here the request-id is not part of the response headers.
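
  A minimal WSGI-middleware sketch of the requested behaviour (class and
  variable names are assumptions, not the merged fix): the request id is
  attached to the headers even when the wrapped application errors out.

  import uuid

  class RequestIdMiddleware(object):
      def __init__(self, app):
          self.app = app

      def __call__(self, environ, start_response):
          req_id = "req-%s" % uuid.uuid4()

          def _start_response(status, headers, exc_info=None):
              # Runs for every response, including error statuses.
              headers.append(("X-Openstack-Request-Id", req_id))
              return start_response(status, headers, exc_info)

          try:
              return self.app(environ, _start_response)
          except Exception:
              # Even an unhandled exception yields a traceable 500.
              start_response("500 Internal Server Error",
                             [("X-Openstack-Request-Id", req_id),
                              ("Content-Length", "0")])
              return []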

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1480196/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488096] Re: spelling mistake in test_images.py

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1488096

Title:
  spelling mistake in test_images.py

Status in Glance:
  Fix Released

Bug description:
  "image" is spelled wrong in tests/functional/v2/test_images.py on lines
  1089 and 1131.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1488096/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471048] Re: glance metadef resource-type-associate fails in postgresql

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1471048

Title:
  glance metadef resource-type-associate fails in postgresql

Status in Glance:
  Fix Released

Bug description:
  The following failure appears in the glance-api.log file when trying to run:
  glance --os-image-api-version 2 md-resource-type-associate --name 
name-of-resource name-of-namespace
  in postgresql (this error does NOT appear if running mysql or sqlite).

  DBAPIError exception wrapped from (ProgrammingError) column "protected" is of 
type boolean but expression is of type integer
  LINE 1: ...'2015-07-02T23:46:18.125563'::timestamp, 'myrt3', 0) RETURNING...
   ^
  HINT:  You will need to rewrite or cast the expression.
   'INSERT INTO metadef_resource_types (created_at, updated_at, name, 
protected) VALUES (%(created_at)s, %(updated_at)s, %(name)s, %(protected)s) 
RETURNING metadef_resource_types.id' {'created_at': datetime.datetime(2015, 7, 
2, 23, 46, 18, 125552), 'protected': 0, 'updated_at': datetime.datetime(2015, 
7, 2, 23, 46, 18, 125563), 'name': u'myrt3'}
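
  A hedged sketch of the fix pattern (the connection URL and reflection
  details are assumptions): pass a real Python bool so SQLAlchemy renders a
  boolean literal that PostgreSQL accepts, instead of the integer 0 that
  MySQL and sqlite silently coerce.

  from sqlalchemy import MetaData, Table, create_engine

  engine = create_engine("postgresql://glance:secret@localhost/glance")
  meta = MetaData(bind=engine)
  table = Table('metadef_resource_types', meta, autoload=True)

  # 'protected': 0 triggers the "type boolean but expression is of type
  # integer" error above; a genuine bool does not.
  engine.execute(table.insert().values(name='myrt3', protected=False))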

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1471048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471912] Re: [OSSA 2015-014] Format-guessing and file disclosure via image conversion (CVE-2015-5163)

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1471912

Title:
  [OSSA 2015-014] Format-guessing and file disclosure via image
  conversion (CVE-2015-5163)

Status in Glance:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  This is a security flaw that allows files from the Glance host to be
  obtained by a user.

  I'm using the Glance file store and have set in /etc/glance/glance-api.conf:
  [taskflow_executor]
  engine_mode=serial  # not sure if needed
  conversion_format=raw

  Make a malicious image available via HTTP.
  $ sudo qemu-img create -f qcow2 /var/www/html/test_image 1M
  $ sudo qemu-img rebase -u -b /etc/passwd /var/www/html/test_image

  $ glance --os-image-api-version 2 task-create --type import --input 
'{"import_from_format": "qcow2", "import_from": "http://127.0.0.1/test_image", 
"image_properties": {"name": "my_image_test", "disk_format": "qcow2", 
"container_format": "bare"}}'
  $ glance image-download my_image_test --file downloaded_image
  $ head downloaded_image
  (output: the first lines of the Glance host's /etc/passwd)

  This happens because Glance runs this command which doesn't specify a format, 
and uses qemu-img's format auto-detection:
  qemu-img convert -O raw file:///tmp/28e1f5e8-9f62-4c01-84be-9feae8852ea4 
/tmp/28e1f5e8-9f62-4c01-84be-9feae8852ea4.converted

  Similar to Cinder bug 1415087.
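
  A minimal sketch of the mitigation pattern (not necessarily the exact
  merged patch): always pin the input format so qemu-img cannot auto-detect
  a qcow2 header and follow an attacker-controlled backing file.

  import subprocess

  def convert_image(src, dst, src_format):
      # "-f" disables format guessing; a qcow2 disguised as another format
      # is then rejected instead of leaking its backing file.
      subprocess.check_call(
          ["qemu-img", "convert", "-f", src_format, "-O", "raw", src, dst])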

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1471912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420008] Re: Owner change doesn't work in v2

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1420008

Title:
  Owner change doesn't work in v2

Status in Glance:
  Fix Released

Bug description:
  v2 is incompatible with v1 when changing an image's owner.

  More testing is needed to confirm whether this still exists in the master
  branch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1420008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394605] Re: Don't use slashes for long lines - use parentheses instead

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1394605

Title:
  Don't use slashes for long lines - use parentheses instead

Status in Glance:
  Fix Released

Bug description:
  According to the OpenStack Style Guidelines - 
http://docs.openstack.org/developer/hacking/#general - it is preferred to wrap
  long lines in parentheses rather than using a backslash for line
  continuation. As we follow those guidelines in this project, we should
  replace backslashes with parentheses, as in the sketch below.
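
  A short before/after sketch (variable names are made up):

  # discouraged: backslash line continuation
  total = first_long_operand + \
      second_long_operand

  # preferred: let the parentheses carry the continuation
  total = (first_long_operand +
           second_long_operand)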

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1394605/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1482144] Re: Incorrect variable name is declared in test_validate_key_cert_key_cant_read() method

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1482144

Title:
  Incorrect variable name is declared in
  test_validate_key_cert_key_cant_read() method

Status in Glance:
  Fix Released

Bug description:
  An incorrect variable name is declared in the
  test_validate_key_cert_key_cant_read() method [1].

  In the second 'with' statement, the variable name should be changed from
  keyf to certf.

  [1]:
  
https://github.com/openstack/glance/blob/d4cf5a015b49b8163a71726781aee54da9c252ec/glance/tests/unit/common/test_utils.py#L308
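
  A hedged sketch of the corrected shape (fixture values are placeholders;
  see the link above for the real test, which calls validate_key_cert):

  import tempfile

  KEY_PEM = "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
  CERT_PEM = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"

  with tempfile.NamedTemporaryFile('w', suffix='.key') as keyf:
      keyf.write(KEY_PEM)
      keyf.flush()
      # The bug: this second context manager was also named "keyf",
      # shadowing the key file; it must bind the *cert* file instead.
      with tempfile.NamedTemporaryFile('w', suffix='.crt') as certf:
          certf.write(CERT_PEM)
          certf.flush()
          print(keyf.name, certf.name)  # stand-in for validate_key_cert(...)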

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1482144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434440] Re: glance db sync failed when using DB2

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1434440

Title:
  glance db sync failed when using DB2

Status in Glance:
  Fix Released

Bug description:
  The DB2 CI on the nova project was blocked by glance db sync.

  Software versions:

  SQLAlchemy (0.9.8)
  sqlalchemy-migrate (0.9.5)
  ibm-db (2.0.4.1)
  ibm-db-alembic (0.1.0) with patch 
https://code.google.com/p/ibm-db/source/detail?r=74221d0d47a9aad3b1a28c9a5a5de41880136560=ibm-db-alembic
  ibm-db-sa (0.3.2) with patch 
https://code.google.com/p/ibm-db/source/detail?r=f560ba1e0d8210c1498c3db35489eb471001749a=ibm-db-sa


  2015-03-19 09:06:19.182 | 2015-03-19 17:06:19.182 13546 INFO 
migrate.versioning.api [-] 38 -> 39...
  2015-03-19 09:06:19.335 | 2015-03-19 17:06:19.332 13546 ERROR 
oslo_db.sqlalchemy.exc_filters [-] DBAPIError exception wrapped from 
(ProgrammingError) ibm_db_dbi::ProgrammingError: Statement Execute Failed: 
[IBM][CLI Driver][DB2/LINUXX8664] SQL0669N  A system required index cannot be 
dropped explicitly.  SQLSTATE=42917 SQLCODE=-669 '\nDROP INDEX 
ix_namespaces_namespace' ()
  2015-03-19 09:06:19.335 | 2015-03-19 17:06:19.332 13546 TRACE 
oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
  2015-03-19 09:06:19.335 | 2015-03-19 17:06:19.332 13546 TRACE 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 951, 
in _execute_context
  2015-03-19 09:06:19.335 | 2015-03-19 17:06:19.332 13546 TRACE 
oslo_db.sqlalchemy.exc_filters context)
  2015-03-19 09:06:19.335 | 2015-03-19 17:06:19.332 13546 TRACE 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/ibm_db_sa-0.3.2-py2.7.egg/ibm_db_sa/ibm_db.py",
 line 106, in do_execute
  2015-03-19 09:06:19.335 | 2015-03-19 17:06:19.332 13546 TRACE 
oslo_db.sqlalchemy.exc_filters cursor.execute(statement, parameters)
  2015-03-19 09:06:19.335 | 2015-03-19 17:06:19.332 13546 TRACE 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/ibm_db_dbi.py", line 1332, in execute
  2015-03-19 09:06:19.336 | 2015-03-19 17:06:19.332 13546 TRACE 
oslo_db.sqlalchemy.exc_filters self._execute_helper(parameters)
  2015-03-19 09:06:19.336 | 2015-03-19 17:06:19.332 13546 TRACE 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/ibm_db_dbi.py", line 1244, in 
_execute_helper
  2015-03-19 09:06:19.336 | 2015-03-19 17:06:19.332 13546 TRACE 
oslo_db.sqlalchemy.exc_filters raise self.messages[len(self.messages) - 1]
  2015-03-19 09:06:19.336 | 2015-03-19 17:06:19.332 13546 TRACE 
oslo_db.sqlalchemy.exc_filters ProgrammingError: ibm_db_dbi::ProgrammingError: 
Statement Execute Failed: [IBM][CLI Driver][DB2/LINUXX8664] SQL0669N  A system 
required index cannot be dropped explicitly.  SQLSTATE=42917 SQLCODE=-669
  2015-03-19 09:06:19.336 | 2015-03-19 17:06:19.332 13546 TRACE 
oslo_db.sqlalchemy.exc_filters
  2015-03-19 09:06:19.458 | 2015-03-19 17:06:19.342 13546 CRITICAL glance [-] 
DBError: (ProgrammingError) ibm_db_dbi::ProgrammingError: Statement Execute 
Failed: [IBM][CLI Driver][DB2/LINUXX8664] SQL0669N  A system required index 
cannot be dropped explicitly.  SQLSTATE=42917 SQLCODE=-669 '\nDROP INDEX 
ix_namespaces_namespace' ()
  2015-03-19 09:06:19.459 | 2015-03-19 17:06:19.342 13546 TRACE glance 
Traceback (most recent call last):
  2015-03-19 09:06:19.459 | 2015-03-19 17:06:19.342 13546 TRACE glance   File 
"/usr/local/bin/glance-manage", line 10, in <module>
  2015-03-19 09:06:19.460 | 2015-03-19 17:06:19.342 13546 TRACE glance 
sys.exit(main())
  2015-03-19 09:06:19.460 | 2015-03-19 17:06:19.342 13546 TRACE glance   File 
"/opt/stack/new/glance/glance/cmd/manage.py", line 303, in main
  2015-03-19 09:06:19.460 | 2015-03-19 17:06:19.342 13546 TRACE glance 
return CONF.command.action_fn()
  2015-03-19 09:06:19.460 | 2015-03-19 17:06:19.342 13546 TRACE glance   File 
"/opt/stack/new/glance/glance/cmd/manage.py", line 171, in sync
  2015-03-19 09:06:19.460 | 2015-03-19 17:06:19.342 13546 TRACE glance 
CONF.command.current_version)
  2015-03-19 09:06:19.461 | 2015-03-19 17:06:19.342 13546 TRACE glance   File 
"/opt/stack/new/glance/glance/cmd/manage.py", line 116, in sync
  2015-03-19 09:06:19.461 | 2015-03-19 17:06:19.342 13546 TRACE glance 
version)
  2015-03-19 09:06:19.461 | 2015-03-19 17:06:19.342 13546 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/migration.py", line 
79, in db_sync
  2015-03-19 09:06:19.461 | 2015-03-19 17:06:19.342 13546 TRACE glance 
return versioning_api.upgrade(engine, repository, version)
  2015-03-19 09:06:19.461 | 2015-03-19 17:06:19.342 13546 TRACE glance   File 
"/usr/local/lib/python2.7/dist-packages/migrate/versioning/api.py", line 186, 
in upgrade
  2015-03-19 09:06:19.462 | 2015-03-19 

[Yahoo-eng-team] [Bug 1411489] Re: ValueError: Tables "task_info, tasks" have non utf8 collation, please make sure all tables are CHARSET=utf8

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1411489

Title:
  ValueError: Tables "task_info,tasks" have non utf8 collation, please
  make sure all tables are CHARSET=utf8

Status in Glance:
  Fix Released

Bug description:
  When upgrading to Juno and running DB migrations I get the following
  error:

  
  glance-manage db version
  34

  glance-manage db sync

  2015-01-16 13:42:08.647 6746 CRITICAL glance [-] ValueError: Tables 
"task_info,tasks" have non utf8 collation, please make sure all tables are 
CHARSET=utf8
  2015-01-16 13:42:08.647 6746 TRACE glance Traceback (most recent call last):
  2015-01-16 13:42:08.647 6746 TRACE glance   File "/usr/bin/glance-manage", 
line 10, in <module>
  2015-01-16 13:42:08.647 6746 TRACE glance sys.exit(main())
  2015-01-16 13:42:08.647 6746 TRACE glance   File 
"/usr/lib/python2.7/dist-packages/glance/cmd/manage.py", line 290, in main
  2015-01-16 13:42:08.647 6746 TRACE glance return 
CONF.command.action_fn(*func_args, **func_kwargs)
  2015-01-16 13:42:08.647 6746 TRACE glance   File 
"/usr/lib/python2.7/dist-packages/glance/cmd/manage.py", line 115, in sync
  2015-01-16 13:42:08.647 6746 TRACE glance version)
  2015-01-16 13:42:08.647 6746 TRACE glance   File 
"/usr/lib/python2.7/dist-packages/oslo/db/sqlalchemy/migration.py", line 77, in 
db_sync
  2015-01-16 13:42:08.647 6746 TRACE glance _db_schema_sanity_check(engine)
  2015-01-16 13:42:08.647 6746 TRACE glance   File 
"/usr/lib/python2.7/dist-packages/oslo/db/sqlalchemy/migration.py", line 110, 
in _db_schema_sanity_check
  2015-01-16 13:42:08.647 6746 TRACE glance ) % ','.join(table_names))
  2015-01-16 13:42:08.647 6746 TRACE glance ValueError: Tables 
"task_info,tasks" have non utf8 collation, please make sure all tables are 
CHARSET=utf8
  2015-01-16 13:42:08.647 6746 TRACE glance
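
  A hedged sketch of the usual remedy before re-running the sync (the
  connection URL is an assumption; the table names come from the error):

  from sqlalchemy import create_engine

  engine = create_engine("mysql://glance:secret@localhost/glance")
  with engine.connect() as conn:
      for table in ("task_info", "tasks"):
          # Convert existing data and the table default to utf8 so the
          # oslo.db sanity check in "glance-manage db sync" passes.
          conn.execute("ALTER TABLE %s CONVERT TO CHARACTER SET utf8 "
                       "COLLATE utf8_general_ci" % table)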

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1411489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440773] Re: Remove WritableLogger as eventlet has a real logger interface in 0.17.2

2015-09-04 Thread Thierry Carrez
** Changed in: glance
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440773

Title:
  Remove WritableLogger as eventlet has a real logger interface in
  0.17.2

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in Manila:
  Fix Released
Status in neutron:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.log:
  Fix Released

Bug description:
  Info from Sean on IRC:

  the patch to use a real logger interface in eventlet has been released
  in 0.17.2, which means we should be able to phase out
  https://github.com/openstack/oslo.log/blob/master/oslo_log/loggers.py

  Eventlet PR was:
  https://github.com/eventlet/eventlet/pull/75

  thanks,
  dims
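
  A minimal sketch of what the phase-out looks like for a consumer (app and
  logger names assumed): pass a plain logging.Logger straight to eventlet
  instead of wrapping it in oslo.log's WritableLogger.

  import logging
  import eventlet
  from eventlet import wsgi

  logger = logging.getLogger("eventlet.wsgi.server")

  def app(environ, start_response):
      start_response("200 OK", [("Content-Type", "text/plain")])
      return [b"ok\n"]

  # eventlet >= 0.17.2 accepts the logger directly; no WritableLogger shim.
  wsgi.server(eventlet.listen(("127.0.0.1", 8080)), app, log=logger)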

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1440773/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487975] Re: Capture operation fails with error 500 "The server has either erred or is incapable of performing the requested operation." when a 4-byte unicode character is entered

2015-09-04 Thread Sandhya
** Changed in: glance
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1487975

Title:
  Capture operation fails with error 500 "The server has either erred or
  is incapable of performing the requested operation." when a 4-byte
  unicode character is entered

Status in Glance:
  Fix Released

Bug description:
  Glance currently returns a 500 when a VM is captured with an image name
  that contains a 4-byte UTF-8 character.
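
  For context, a hedged sketch of what makes such names special (Python 3
  semantics assumed; this is not the merged fix). Characters outside the
  Basic Multilingual Plane need four bytes in UTF-8, which MySQL's 3-byte
  "utf8" charset - a common registry backend setting - cannot store.

  def has_4byte_utf8(name):
      # Code points above U+FFFF encode to four bytes in UTF-8.
      return any(ord(ch) > 0xFFFF for ch in name)

  print(has_4byte_utf8(u"my_image_\U0001F600"))  # True: emoji needs 4 bytes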

  glance api log:

  2015-08-18 04:19:22.315 15879 ERROR glance.registry.client.v1.client 
[req-73bb1b4a-0f6c-49ee-93e4-60bbccfda528 
0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 
20a07ca65fc44285bd68287a605c8a53 - - -] Registry client request POST /images 
raised ServerError
  2015-08-18 04:19:22.315 15879 ERROR glance.registry.client.v1.client 
Traceback (most recent call last):
  2015-08-18 04:19:22.315 15879 ERROR glance.registry.client.v1.client   File 
"/usr/lib/python2.7/site-packages/glance/registry/client/v1/client.py", line 
121, in do_request
  2015-08-18 04:19:22.315 15879 ERROR glance.registry.client.v1.client 
**kwargs)
  2015-08-18 04:19:22.315 15879 ERROR glance.registry.client.v1.client   File 
"/usr/lib/python2.7/site-packages/glance/common/client.py", line 71, in wrapped
  2015-08-18 04:19:22.315 15879 ERROR glance.registry.client.v1.client 
return func(self, *args, **kwargs)
  2015-08-18 04:19:22.315 15879 ERROR glance.registry.client.v1.client   File 
"/usr/lib/python2.7/site-packages/glance/common/client.py", line 377, in 
do_request
  2015-08-18 04:19:22.315 15879 ERROR glance.registry.client.v1.client 
headers=copy.deepcopy(headers))
  2015-08-18 04:19:22.315 15879 ERROR glance.registry.client.v1.client   File 
"/usr/lib/python2.7/site-packages/glance/common/client.py", line 88, in wrapped
  2015-08-18 04:19:22.315 15879 ERROR glance.registry.client.v1.client 
return func(self, method, url, body, headers)
  2015-08-18 04:19:22.315 15879 ERROR glance.registry.client.v1.client   File 
"/usr/lib/python2.7/site-packages/glance/common/client.py", line 534, in 
_do_request
  2015-08-18 04:19:22.315 15879 ERROR glance.registry.client.v1.client 
raise exception.ServerError()
  2015-08-18 04:19:22.315 15879 ERROR glance.registry.client.v1.client 
ServerError: [G'\uff34\uff48\uff45 \uff52\uff45\uff51\uff55\uff45\uff53\uff54 
\uff52eturned 500 Internal Server Error.\u0e0f\u0e39\u0130\u0131\uff5c]
  2015-08-18 04:19:22.315 15879 ERROR glance.registry.client.v1.client
  2015-08-18 04:19:22.393 15879 INFO eventlet.wsgi.server 
[req-73bb1b4a-0f6c-49ee-93e4-60bbccfda528 
0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 
20a07ca65fc44285bd68287a605c8a53 - - -] Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/eventlet/wsgi.py", line 454, in 
handle_one_response
  result = self.application(self.environ, start_response)
File "/usr/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
  return self.func(req, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/oslo_middleware/base.py", line 56, 
in __call__
  response = req.get_response(self.application)
File "/usr/lib/python2.7/site-packages/webob/request.py", line 1296, in send
  application, catch_exc_info=False)
File "/usr/lib/python2.7/site-packages/webob/request.py", line 1260, in 
call_application
  app_iter = application(self.environ, start_response)
File "/usr/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
  return self.func(req, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 590, in 
__call__
  response = req.get_response(self.application)
File "/usr/lib/python2.7/site-packages/webob/request.py", line 1296, in send
  application, catch_exc_info=False)
File "/usr/lib/python2.7/site-packages/webob/request.py", line 1260, in 
call_application
  app_iter = application(self.environ, start_response)
File "/usr/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
  resp = self.call_func(req, *args, **self.kwargs)
File "/usr/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func
  return self.func(req, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/osprofiler/web.py", line 99, in 
__call__
  return request.get_response(self.application)
File "/usr/lib/python2.7/site-packages/webob/request.py", line 1296, in send
  application, catch_exc_info=False)
File "/usr/lib/python2.7/site-packages/webob/request.py", line 1260, in 
call_application
  app_iter = 

[Yahoo-eng-team] [Bug 1491930] Re: DevStack fails to spawn VMs in Fedora22

2015-09-04 Thread Daniel Mellado
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1491930

Title:
  DevStack fails to spawn VMs in Fedora22

Status in devstack:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  When trying to spawn an instance on the latest DevStack with F22, it fails
  with a nova trace [1].
  Latest commit: 1d0b0d363e "Add/Overwrite default images in IMAGE_URLS and
  detect duplicates"

  * Steps to reproduce

  1) Try to deploy any image

  [1] http://paste.openstack.org/show/28/

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1491930/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp