[Yahoo-eng-team] [Bug 1912343] [NEW] Nova returns 'The requested availability zone is not available (HTTP 400)' on any cell errors

2021-01-19 Thread Andrey Volkov
Public bug reported:

Description
===

In the case of, for example, DB errors, the `openstack server create 
--availability-zone ...` command returns
"The requested availability zone is not available (HTTP 400)" instead of an 
actual 50x error.
This is treated as a client error, which, for example, stops Heat from retrying 
the request.

For example:

root@dev01:~# openstack server create --image cirros-0.5.1-x86_64-disk --flavor 
c1 vm5 --net public --availability-zone nova
The requested availability zone is not available (HTTP 400) (Request-ID: 
req-50190228-51ac-4303-ad3e-d8b920bb7ad8)

Logs:
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context [None 
req-50190228-51ac-4303-ad3e-d8b920bb7ad8 admin admin] Error gathering result 
from cell ----: ValueError: Cannot get service 
list (artificial error to help reproduce a bug)
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context 
Traceback (most recent call last):
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context   File 
"/opt/stack/nova/nova/context.py", line 426, in gather_result
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context 
result = fn(*args, **kwargs)
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context   File 
"/usr/local/lib/python3.6/dist-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context 
result = fn(cls, context, *args, **kwargs)
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context   File 
"/opt/stack/nova/nova/objects/service.py", line 635, in get_all
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context 
raise ValueError('Cannot get service list (artificial error to help reproduce a 
bug)')
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context 
ValueError: Cannot get service list (artificial error to help reproduce a bug)
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: DEBUG nova.objects.service 
[None req-50190228-51ac-4303-ad3e-d8b920bb7ad8 admin admin] ! service get_all 
{{(pid=4872) get_all /opt/stack/nova/nova/objects/service.py:632}}
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context [None 
req-50190228-51ac-4303-ad3e-d8b920bb7ad8 admin admin] Error gathering result 
from cell 558d4e82-cdc0-4020-a56b-6835326f58ec: ValueError: Cannot get service 
list (artificial error to help reproduce a bug)
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context 
Traceback (most recent call last):
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context   File 
"/opt/stack/nova/nova/context.py", line 426, in gather_result
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context 
result = fn(*args, **kwargs)
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context   File 
"/usr/local/lib/python3.6/dist-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context 
result = fn(cls, context, *args, **kwargs)
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context   File 
"/opt/stack/nova/nova/objects/service.py", line 635, in get_all
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context 
raise ValueError('Cannot get service list (artificial error to help reproduce a 
bug)')
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context 
ValueError: Cannot get service list (artificial error to help reproduce a bug)
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: ERROR nova.context
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: WARNING nova.compute.api 
[None req-50190228-51ac-4303-ad3e-d8b920bb7ad8 admin admin] Cell 
---- is not responding and hence skipped from 
the results.
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: WARNING nova.compute.api 
[None req-50190228-51ac-4303-ad3e-d8b920bb7ad8 admin admin] Cell 
558d4e82-cdc0-4020-a56b-6835326f58ec is not responding and hence skipped from 
the results.
Jan 19 12:37:32 dev01 devstack@n-api.service[4869]: INFO 
nova.api.openstack.wsgi [None req-50190228-51ac-4303-ad3e-d8b920bb7ad8 admin 
admin] HTTP exception thrown: The requested availability zone is not available

Under the same conditions, `openstack compute service list` returns an
empty success result.
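The underlying problem is that the availability-zone check cannot tell "no matching services" apart from "a cell failed to answer". A minimal, self-contained sketch of the distinction (illustrative names only, not nova's actual code; nova marks unresponsive cells with a sentinel such as nova.context.did_not_respond_sentinel):

```python
# Sentinel marking a cell that failed to answer (stands in for
# nova's did_not_respond_sentinel; all names here are illustrative).
CELL_DOWN = object()


class ClientError(Exception):
    """Would map to HTTP 400."""


class ServerError(Exception):
    """Would map to HTTP 503/50x, letting clients like Heat retry."""


def validate_az(requested_az, cell_results):
    """cell_results: mapping of cell uuid -> list of AZ names, or CELL_DOWN."""
    if any(r is CELL_DOWN for r in cell_results.values()):
        # A cell failed, so we cannot know whether the AZ exists;
        # report a server-side error instead of blaming the client.
        raise ServerError('cell(s) unavailable, try again later')
    azs = {az for r in cell_results.values() for az in r}
    if requested_az not in azs:
        raise ClientError('The requested availability zone is not available')
```

With this split, a DB failure in a cell surfaces as a retryable 50x rather than the misleading 400 shown above.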

Steps to reproduce
==

To emulate runtime errors, something like the following patch can be used:

@@ -629,6 +629,10 @@ class ServiceList(base.ObjectListBase, base.NovaObject):
 
     @base.remotable_classmethod
     def get_all(cls, context, disabled=None, set_zones=False):
+        import datetime
+        if datetime.datetime.now() > datetime.datetime(2021, 1, 19, 12, 7, 0, 976979):
+            raise ValueError('Cannot get service list '
+                             '(artificial error to help reproduce a bug)')
         db

[Yahoo-eng-team] [Bug 1875287] [NEW] VM unshelve failed if verify_glance_signatures enabled

2020-04-26 Thread Andrey Volkov
Public bug reported:

Description

If CONF.glance.enable_image_auto_signature = True, then the image is
required to have signature-related properties. The `nova shelve` command
creates an image without those properties, so `nova unshelve` fails.

Steps to reproduce

1.  Set

  [glance]
  enable_image_auto_signature = True

and restart Nova compute.

3. nova shelve vm1; nova unshelve vm1

Expected result

vm1 status ACTIVE.

Actual result

vm1 status SHELVED_OFFLOADED and error in log:
ERROR oslo_messaging.rpc.server
cursive.exception.SignatureVerificationError: Signature verification
for the image failed: Required image properties for signature
verification do not exist. Cannot verify signature. Missing property:
img_signature_uuid.

http://ix.io/2jKs

Environment: master

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1875287

Title:
  VM unshelve failed if verify_glance_signatures enabled

Status in OpenStack Compute (nova):
  New

Bug description:
  Description

  If CONF.glance.enable_image_auto_signature = True, then the image is
  required to have signature-related properties. The `nova shelve` command
  creates an image without those properties, so `nova unshelve` fails.

  Steps to reproduce

  1.  Set

[glance]
enable_image_auto_signature = True

  and restart Nova compute.

  3. nova shelve vm1; nova unshelve vm1

  Expected result

  vm1 status ACTIVE.

  Actual result

  vm1 status SHELVED_OFFLOADED and error in log:
  ERROR oslo_messaging.rpc.server
  cursive.exception.SignatureVerificationError: Signature verification
  for the image failed: Required image properties for signature
  verification do not exist. Cannot verify signature. Missing property:
  img_signature_uuid.

  http://ix.io/2jKs

  Environment: master

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1875287/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1818687] [NEW] Cannot boot a VM with utf8 name with contrail

2019-03-05 Thread Andrey Volkov
Public bug reported:

This traceback is for Queens release:

2019-02-28 17:38:50.815 4688 ERROR nova.virt.libvirt.driver 
[req-ff7251c9-ffc4-427c-8971-ae3b06ddf3bd f86665bb986e4392976a2f22d9c2d522 
b35422ec2c02435cbe6a606659f595e3 - default default] [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] Failed to start libvirt guest: 
UnicodeEncodeError: 'ascii' codec can't encode character u'\u20a1' in position 
19: ordinal not in range(128)
2019-02-28 17:38:51.264 4688 INFO nova.virt.libvirt.driver 
[req-ff7251c9-ffc4-427c-8971-ae3b06ddf3bd f86665bb986e4392976a2f22d9c2d522 
b35422ec2c02435cbe6a606659f595e3 - default default] [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] Deleting instance files 
/var/lib/nova/instances/8e90550d-3b62-4f70-bd70-b3c135a8a092_del
2019-02-28 17:38:51.265 4688 INFO nova.virt.libvirt.driver 
[req-ff7251c9-ffc4-427c-8971-ae3b06ddf3bd f86665bb986e4392976a2f22d9c2d522 
b35422ec2c02435cbe6a606659f595e3 - default default] [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] Deletion of 
/var/lib/nova/instances/8e90550d-3b62-4f70-bd70-b3c135a8a092_del complete
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager 
[req-ff7251c9-ffc4-427c-8971-ae3b06ddf3bd f86665bb986e4392976a2f22d9c2d522 
b35422ec2c02435cbe6a606659f595e3 - default default] [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] Instance failed to spawn: 
UnicodeEncodeError: 'ascii' codec can't encode character u'\u20a1' in position 
19: ordinal not in range(128)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] Traceback (most recent call last):
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2252, in 
_build_resources
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] yield resources
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2032, in 
_build_and_run_instance
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] block_device_info=block_device_info)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3107, in 
spawn
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] destroy_disks_on_failure=True)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5627, in 
_create_domain_and_network
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] destroy_disks_on_failure)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] self.force_reraise()
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] six.reraise(self.type_, self.value, 
self.tb)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5586, in 
_create_domain_and_network
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] self.plug_vifs(instance, network_info)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 836, in 
plug_vifs
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] self.vif_driver.plug(instance, vif)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 805, in plug
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092] func(instance, vif)
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3b62-4f70-bd70-b3c135a8a092]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 762, in 
plug_vrouter
2019-02-28 17:38:51.520 4688 ERROR nova.compute.manager [instance: 
8e90550d-3

[Yahoo-eng-team] [Bug 1808286] [NEW] Inconsistent behavior for the marker option for instances and build requests

2018-12-12 Thread Andrey Volkov
Public bug reported:

When --marker is used for instances, it skips the instance with the marker
(http://ix.io/1vU9).

For build-request instances, the --marker option includes the instance with
the marker (http://ix.io/1vUa).

It's hard to catch the moment when a build request is available, so I used the
following SQL to emulate build-request presence: http://ix.io/1vUb
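The inconsistency boils down to whether pagination starts after or at the marker; a toy sketch (not nova's DB code) of the two behaviors described above:

```python
def instances_after_marker(items, marker):
    # Instance listing behavior: the marker item itself is skipped.
    i = items.index(marker)
    return items[i + 1:]


def build_requests_from_marker(items, marker):
    # Observed build-request behavior: the marker item is included.
    i = items.index(marker)
    return items[i:]
```

With items = ['a', 'b', 'c'] and marker 'b', the first returns ['c'] and the second ['b', 'c']; a client paging across both code paths therefore sees 'b' either twice or once depending on which backend served the page.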

** Affects: nova
 Importance: Low
 Status: New

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1808286

Title:
  Inconsistent behavior for the marker option for instances and build
  requests

Status in OpenStack Compute (nova):
  New

Bug description:
  When --marker is used for instances, it skips the instance with the marker
  (http://ix.io/1vU9).

  For build-request instances, the --marker option includes the instance
  with the marker (http://ix.io/1vUa).

  It's hard to catch the moment when a build request is available, so I used
  the following SQL to emulate build-request presence: http://ix.io/1vUb

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1808286/+subscriptions



[Yahoo-eng-team] [Bug 1653810] Re: [sriov] Modifying or removing pci_passthrough_whitelist may result in inconsistent VF availability

2018-07-18 Thread Andrey Volkov
** Changed in: nova
   Status: In Progress => Triaged

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1653810

Title:
  [sriov] Modifying or removing pci_passthrough_whitelist may result in
  inconsistent VF availability

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  OpenStack Version: v14 (Newton)
  NIC: Mellanox ConnectX-3 Pro

  While testing an SR-IOV implementation, we found that
  pci_passthrough_whitelist in nova.conf is involved in the population
  of the pci_devices table in the Nova DB. Making changes to the
  device/interface in the whitelist or commenting out the line
  altogether, and restarting nova-compute, can result in the entries
  being marked as 'deleted' in the database. Reconfiguring the
  pci_passthrough_whitelist option with the same device/interface will
  result in new entries being created and marked as 'available'. This
  can cause PCI device claim issues if an existing instance is still
  running and using a VF and another instance is booted using a 'direct'
  port.

  In the following table, you can see the original implementation that
  includes an allocated VF. During testing, we commented out the
  pci_passthrough_whitelist line in nova.conf, and restarted nova-
  compute. The entries were marked as 'deleted', though the running
  instance was not deleted and continued to function.  The
  pci_passthrough_whitelist config was then returned and nova-compute
  restarted. New entries were created and marked as 'available':

  MariaDB [nova]> select * from pci_devices;
  
+-+-+-+-+-+-+--++---+--+--+-+-++--+--+---+--+
  | created_at  | updated_at  | deleted_at  | deleted | 
id  | compute_node_id | address  | product_id | vendor_id | dev_type | 
dev_id   | label   | status  | extra_info | instance_uuid   
 | request_id   | numa_node | 
parent_addr  |
  
+-+-+-+-+-+-+--++---+--+--+-+-++--+--+---+--+
  | 2016-12-29 15:23:36 | 2016-12-29 20:40:34 | 2016-12-29 20:42:26 |  72 | 
 72 |   6 | :07:00.0 | 1007   | 15b3  | type-PF  | 
pci__07_00_0 | label_15b3_1007 | unavailable | {} | NULL
 | NULL | 0 | NULL  
   |
  | 2016-12-29 15:23:36 | 2016-12-29 20:40:34 | 2016-12-29 20:43:23 |  75 | 
 75 |   6 | :07:00.1 | 1004   | 15b3  | type-VF  | 
pci__07_00_1 | label_15b3_1004 | available   | {} | NULL
 | NULL | 0 | 
:07:00.0 |
  | 2016-12-29 15:23:36 | 2016-12-29 20:40:34 | 2016-12-29 20:42:26 |  78 | 
 78 |   6 | :07:00.2 | 1004   | 15b3  | type-VF  | 
pci__07_00_2 | label_15b3_1004 | available   | {} | NULL
 | NULL | 0 | 
:07:00.0 |
  | 2016-12-29 15:23:36 | 2016-12-29 20:40:34 | 2016-12-29 20:44:25 |  81 | 
 81 |   6 | :07:00.3 | 1004   | 15b3  | type-VF  | 
pci__07_00_3 | label_15b3_1004 | available   | {} | NULL
 | NULL | 0 | 
:07:00.0 |
  | 2016-12-29 15:23:36 | 2016-12-29 20:40:34 | 2016-12-29 20:42:26 |  84 | 
 84 |   6 | :07:00.4 | 1004   | 15b3  | type-VF  | 
pci__07_00_4 | label_15b3_1004 | available   | {} | NULL
 | NULL | 0 | 
:07:00.0 |
  | 2016-12-29 15:23:36 | 2016-12-29 20:40:34 | 2016-12-29 20:43:23 |  87 | 
 87 |   6 | :07:00.5 | 1004   | 15b3  | type-VF  | 
pci__07_00_5 | label_15b3_1004 | available   | {} | NULL
 | NULL | 0 | 
:07:00.0 |
  | 2016-12-29 15:23:36 | 2016-12-29 20:40:34 | 2016-12-29 20:42:26 |  90 | 
 90 |   6 | :07:00.6 | 1004   | 15b3  | type-VF  | 
pci__07_00_6 | label_15b3_1004 | available   | {} | NULL
 | NULL | 0 | 
:07:00.0 

[Yahoo-eng-team] [Bug 1777088] Re: controller fails NUMA topology requirements. The instance does not fit on this host. host_passes

2018-07-18 Thread Andrey Volkov
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1777088

Title:
  controller fails NUMA topology requirements. The instance does not fit
  on this host. host_passes

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  openstack queens:

  
  Turn on NUMA scheduling:
  vi /etc/nova/nova.conf
  enabled_filters =,NUMATopologyFilter

  

  (openstack) flavor show p1
  
++--+
  | Field  | Value  
  |
  
++--+
  | OS-FLV-DISABLED:disabled   | False  
  |
  | OS-FLV-EXT-DATA:ephemeral  | 0  
  |
  | access_project_ids | None   
  |
  | disk   | 10 
  |
  | id | ab9f4851-c4a0-48e4-affe-e780ad8a87a1   
  |
  | name   | p1 
  |
  | os-flavor-access:is_public | True   
  |
  | properties | hw:mem_page_size='1024', hw:numa_cpus.1='20', 
hw:numa_mem.1='512', hw:numa_nodes='1' |
  | ram| 512
  |
  | rxtx_factor| 1.0
  |
  | swap   |
  |
  | vcpus  | 1  
  |
  
++--+

  
  [root@controller ~]# numactl --hardware
  available: 2 nodes (0-1)
  node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
  node 0 size: 130669 MB
  node 0 free: 116115 MB
  node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
  node 1 size: 131072 MB
  node 1 free: 114675 MB
  node distances:
  node   0   1 
0:  10  21 
1:  21  10 


  
  Error log
  tail -f /var/log/nova/nova-conductor.log ::

   default default] Failed to compute_task_build_instances: No valid host was 
found. There are not enough hosts available.
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 
226, in inner
  return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 
154, in select_destinations
  allocation_request_version, return_alternates)
File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", 
line 91, in select_destinations
  allocation_request_version, return_alternates)
File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", 
line 243, in _schedule
  claimed_instance_uuids)
File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", 
line 280, in _ensure_sufficient_hosts
  raise exception.NoValidHost(reason=reason)
  NoValidHost: No valid host was found. There are not enough hosts available.
  : NoValidHost_Remote: No valid host was found. There are not enough hosts 
available.


  Error log
  tail -f /var/log/nova/nova-scheduler.log::

  2018-06-15 16:52:33.457 5829 DEBUG nova.virt.hardware 
[req-be251765-6c3b-46aa-ae05-6c2e12ae8661 7e909565a4b847fe81cd6d1cf778c893 
b2760ba26e5645bf9856669d560d91c7 - default default] Attempting to fit instance 
cell 
InstanceNUMACell(cpu_pinning_raw=None,cpu_policy=None,cpu_thread_policy=None,cpu_topology=,cpuset=set([0]),cpuset_reserved=None,id=0,memory=512,pagesize=1024)
 on host_cell 
NUMACell(cpu_usage=0,cpuset=set([8,9,10,11,12,13,14,15,24,25,26,27,28,29,30,31]),id=1,memory=131072,memory_usage=0,mempages=[NUMAPagesTopology,NUMAPagesTopology],pinned_cpus=set([]),siblings=[set([8,24]),set([14,30]),set([15,31]),set([11,27]),set([10,26]),set([12,28]),set([9,25]),set([13,29])])
 _numa_fit_instance_cell 
/usr/lib/python2.7/site-packages/nova/virt/hardware.py:974
  2018-06-15 16:52:33.458 5829 DEBUG nova.virt.hardware 
[req-be251765-6c3b-46aa-ae05-6c2e12ae8661 7e909565a4b847fe81cd6d1cf778c893 
b2760ba26e5645bf9856669d560d91c7

[Yahoo-eng-team] [Bug 1771773] [NEW] Ssl2/3 should not be used for secure VNC access

2018-05-17 Thread Andrey Volkov
Public bug reported:

This report is based on Bandit scanner results.

On
https://git.openstack.org/cgit/openstack/nova/tree/nova/console/rfb/authvencrypt.py?h=refs/heads/master#n137

137 wrapped_sock = ssl.wrap_socket(

wrap_socket is used without ssl_version, which means SSLv23 by default.
As the server side (QEMU) is based on GnuTLS, which supports all modern TLS
versions, it is possible to use a stricter TLS version on the client (TLSv1.2).
Another option is to make this parameter configurable.
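On modern Python the same hardening is expressed with an SSLContext (a sketch of the idea, not the actual nova change; on the Python 2 code path in question it would amount to passing ssl_version=ssl.PROTOCOL_TLSv1_2 to wrap_socket):

```python
import ssl


def make_vencrypt_client_context():
    # PROTOCOL_TLS_CLIENT never negotiates SSLv2/SSLv3; additionally
    # pin the floor to TLS 1.2 as the report suggests.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

The context would then wrap the RFB socket via ctx.wrap_socket(sock, server_hostname=...); minimum_version requires Python 3.7+.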

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771773

Title:
  Ssl2/3 should not be used for secure VNC access

Status in OpenStack Compute (nova):
  New

Bug description:
  This report is based on Bandit scanner results.

  On
  
https://git.openstack.org/cgit/openstack/nova/tree/nova/console/rfb/authvencrypt.py?h=refs/heads/master#n137

  137 wrapped_sock = ssl.wrap_socket(

  wrap_socket is used without ssl_version, which means SSLv23 by default.
  As the server side (QEMU) is based on GnuTLS, which supports all modern TLS
  versions, it is possible to use a stricter TLS version on the client (TLSv1.2).
  Another option is to make this parameter configurable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1771773/+subscriptions



[Yahoo-eng-team] [Bug 1771538] [NEW] PowerVM config drive path is not secure

2018-05-16 Thread Andrey Volkov
Public bug reported:

This report is based on the Bandit scanner results and code review.

1) 
On 
https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/powervm/media.py?h=refs/heads/master#n44

43 _VOPT_SIZE_GB = 1
44 _VOPT_TMPDIR = '/tmp/cfgdrv/'
45

We have a hardcoded tmp dir that could be cleaned up after a compute node reboot.
As mentioned in the TODO, it might be good to use a conf option.

2) 
On 
https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/powervm/media.py?h=refs/heads/master#n116
A predictable file name based on user input is used:
116file_name = pvm_util.sanitize_file_name_for_api(
117instance.name, prefix='cfg_', suffix='.iso',
118max_len=pvm_const.MaxLen.VOPT_NAME)
Probably we could use instance.uuid for that.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771538

Title:
  PowerVM config drive path is not secure

Status in OpenStack Compute (nova):
  New

Bug description:
  This report is based on the Bandit scanner results and code review.

  1) 
  On 
https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/powervm/media.py?h=refs/heads/master#n44

  43 _VOPT_SIZE_GB = 1
  44 _VOPT_TMPDIR = '/tmp/cfgdrv/'
  45

  We have a hardcoded tmp dir that could be cleaned up after a compute node
  reboot. As mentioned in the TODO, it might be good to use a conf option.

  2) 
  On 
https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/powervm/media.py?h=refs/heads/master#n116
  A predictable file name based on user input is used:
  116file_name = pvm_util.sanitize_file_name_for_api(
  117instance.name, prefix='cfg_', suffix='.iso',
  118max_len=pvm_const.MaxLen.VOPT_NAME)
  Probably we could use instance.uuid for that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1771538/+subscriptions



[Yahoo-eng-team] [Bug 1727719] [NEW] Placement: resource provider can't be deleted if there is trait associated with it

2017-10-26 Thread Andrey Volkov
Public bug reported:

nova: 6d61c61a32

To reproduce:
1. Create a new RP
2. Associate a trait with it.
3. Try to delete the RP.

Error:
Response: 
{"computeFault": {"message": "The server has either erred or is incapable of 
performing the requested operation.", "code": 500}}

(Pdb) __exception__
(, 
DBReferenceError(u"(pymysql.err.IntegrityError) (1451, u'Cannot delete or 
update a parent row: a foreign key constraint fails 
(`nova_api`.`resource_provider_traits`, CONSTRAINT 
`resource_provider_traits_ibfk_1` FOREIGN KEY (`resource_provider_id`) 
REFERENCES `resource_providers` (`id`))') [SQL: u'DELETE FROM 
resource_providers WHERE resource_providers.id = %(id_1)s'] [parameters: 
{u'id_1': 17}]",))
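The failure is a plain foreign-key ordering problem; a self-contained sqlite sketch (not the nova_api schema or placement's actual code, columns abbreviated) of why the DELETE 500s and what the fix has to do:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')
conn.execute('CREATE TABLE resource_providers (id INTEGER PRIMARY KEY)')
conn.execute(
    'CREATE TABLE resource_provider_traits ('
    ' resource_provider_id INTEGER REFERENCES resource_providers(id),'
    ' trait TEXT)')
conn.execute('INSERT INTO resource_providers (id) VALUES (17)')
conn.execute("INSERT INTO resource_provider_traits VALUES (17, 'CUSTOM_FOO')")

# Deleting the provider while a trait row still references it fails,
# which is what surfaces as the 500 in placement.
try:
    conn.execute('DELETE FROM resource_providers WHERE id = 17')
    raise AssertionError('expected an IntegrityError')
except sqlite3.IntegrityError:
    pass

# Removing the trait association first lets the delete succeed.
conn.execute('DELETE FROM resource_provider_traits '
             'WHERE resource_provider_id = 17')
conn.execute('DELETE FROM resource_providers WHERE id = 17')
```

Placement would either need to delete the trait associations in the same transaction or translate the constraint violation into a 409 rather than a 500.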

** Affects: nova
 Importance: Low
 Status: New


** Tags: low-hanging-fruit placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1727719

Title:
  Placement: resource provider can't be deleted if there is trait
  associated with it

Status in OpenStack Compute (nova):
  New

Bug description:
  nova: 6d61c61a32

  To reproduce:
  1. Create a new RP
  2. Associate a trait with it.
  3. Try to delete the RP.

  Error:
  Response: 
  {"computeFault": {"message": "The server has either erred or is incapable of 
performing the requested operation.", "code": 500}}

  (Pdb) __exception__
  (, 
DBReferenceError(u"(pymysql.err.IntegrityError) (1451, u'Cannot delete or 
update a parent row: a foreign key constraint fails 
(`nova_api`.`resource_provider_traits`, CONSTRAINT 
`resource_provider_traits_ibfk_1` FOREIGN KEY (`resource_provider_id`) 
REFERENCES `resource_providers` (`id`))') [SQL: u'DELETE FROM 
resource_providers WHERE resource_providers.id = %(id_1)s'] [parameters: 
{u'id_1': 17}]",))

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1727719/+subscriptions



[Yahoo-eng-team] [Bug 1680660] Re: empty string server tag can be created

2017-04-07 Thread Andrey Volkov
Thanks for the report.

What do you mean by an empty string? My tests show the following:
put tags [] => no tags created
put tags [''] => u'' is too short
put tags ['   '] => ok

Looks like this corresponds with the docs. http://ix.io/pTo
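The schema behavior described in the reply is just a minimum-length check; a tiny sketch (the 60-character maximum is an assumption taken from the compute API reference, not verified here):

```python
def tag_valid(tag):
    # Mirrors the API schema's shape: a non-empty string up to a
    # maximum length (60 is an assumed value).
    return isinstance(tag, str) and 1 <= len(tag) <= 60
```

Note that '' fails the minimum length while whitespace-only strings pass, matching the "put tags ['   '] => ok" observation.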

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1680660

Title:
  empty string server tag can be created

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  In compute api description "Server tags (servers, tags)¶
  Tag is a non-empty string."

  But actually we can create empty string server tags via api Replace Tags:
   PUT  /servers/{server_id}/tags

  
  Steps to reproduce
  ==
  PUT  /servers/{server_id}/tags with empty string

  Expected result
  ===
  Fail to create, or we need to modify the api doc

  Actual result
  =
  Update successfully, but api doc says it it not allowed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1680660/+subscriptions



[Yahoo-eng-team] [Bug 1673683] Re: PlacementNotConfigured: This compute is not configured to talk to the placement service. Configure the [placement] section of nova.conf and restart the service.

2017-03-17 Thread Andrey Volkov
Yes, it could be something like http://ix.io/p3p.
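For reference, a minimal [placement] section looks roughly like this (all values are placeholders for your deployment, not taken from the paste above):

```ini
[placement]
os_region_name = RegionOne
auth_type = password
auth_url = http://controller:35357/v3
project_name = service
project_domain_name = Default
username = placement
user_domain_name = Default
password = PLACEMENT_PASS
```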

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1673683

Title:
  PlacementNotConfigured: This compute is not configured to talk to the
  placement service. Configure the [placement] section of nova.conf and
  restart the service.

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I installed and configured a compute node, following:
  https://docs.openstack.org/ocata/install-guide-rdo/nova-compute-install.html

  After 'systemctl start libvirtd.service openstack-nova-compute.service',
  'openstack-nova-compute' is dead.
  I found this error in '/var/log/nova/nova-compute.log':

  2017-03-17 14:23:17.881 19364 ERROR oslo_service.service [-] Error starting 
thread.
  2017-03-17 14:23:17.881 19364 ERROR oslo_service.service Traceback (most 
recent call last):
  2017-03-17 14:23:17.881 19364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/oslo_service/service.py", line 722, in 
run_service
  2017-03-17 14:23:17.881 19364 ERROR oslo_service.service service.start()
  2017-03-17 14:23:17.881 19364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 144, in start
  2017-03-17 14:23:17.881 19364 ERROR oslo_service.service 
self.manager.init_host()
  2017-03-17 14:23:17.881 19364 ERROR oslo_service.service   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1136, in 
init_host
  2017-03-17 14:23:17.881 19364 ERROR oslo_service.service raise 
exception.PlacementNotConfigured()
  2017-03-17 14:23:17.881 19364 ERROR oslo_service.service 
PlacementNotConfigured: This compute is not configured to talk to the placement 
service. Configure the [placement] section of nova.conf and restart the service.

  How to configure "CONF.placement.os_region_name" ?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1673683/+subscriptions



[Yahoo-eng-team] [Bug 1672041] Re: nova.scheduler.client.report 409 Conflict

2017-03-13 Thread Andrey Volkov
From the description you provided, it doesn't look like an error.

> Unable to create allocation for 'MEMORY_MB' on resource provider
'610ee875-405e-4738-8099-5a218fa4986f'. The requested amount would
violate inventory constraints.

means that the compute node does not have enough memory available.
You can check either the DB state (http://ix.io/oXq) or the placement API
(http://ix.io/oXr).
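The "violates inventory constraints" check can be sketched roughly as follows. This is a simplified, illustrative rendition of the placement capacity logic, not the actual Nova code; the function name and parameters are made up for the sketch:

```python
def allocation_fits(requested, total, reserved=0, allocation_ratio=1.0,
                    used=0, min_unit=1, max_unit=None, step_size=1):
    """Return True if allocating `requested` units fits the inventory."""
    if max_unit is None:
        max_unit = total
    # Per-request constraints: min_unit <= requested <= max_unit,
    # and requested must be a whole multiple of step_size.
    if requested < min_unit or requested > max_unit:
        return False
    if requested % step_size != 0:
        return False
    # Overall capacity is (total - reserved) * allocation_ratio.
    return used + requested <= (total - reserved) * allocation_ratio
```

In the placement-api log below, "Requested: 512, min_unit: 1, max_unit: 1" fails the per-request check no matter how much memory is free, which matches the "violates min_unit, max_unit, or step_size" warning.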

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1672041

Title:
  nova.scheduler.client.report  409 Conflict

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  With stable/newton and placement-api an instance fails to start.

  Compute logs shows this error:

  2017-03-11 04:20:58.830 7 DEBUG nova.scheduler.client.report 
[req-330e1b63-286f-47f4-b8ca-49365d9509b5 - - - - -] [instance: 
4a33b8fc-b9a4-49ea-8afc-4cb15bd3f841] Sending allocation for instance 
{'allocations': [{'resource_provider': {'uuid': 
'610ee875-405e-4738-8099-5a218fa4986f'}, 'resources': {'MEMORY_MB': 512, 
'VCPU': 1, 'DISK_GB': 1}}]} _allocate_for_instance 
/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/scheduler/client/report.py:397
  2017-03-11 04:20:58.863 7 WARNING nova.scheduler.client.report 
[req-330e1b63-286f-47f4-b8ca-49365d9509b5 - - - - -] Unable to submit 
allocation for instance 4a33b8fc-b9a4-49ea-8afc-4cb15bd3f841 (409 409 Conflict

  There was a conflict when trying to complete your request.

   Unable to allocate inventory: Unable to create allocation for
  'MEMORY_MB' on resource provider
  '610ee875-405e-4738-8099-5a218fa4986f'. The requested amount would
  violate inventory constraints.  )

  Corresponding to this event placement-api log show this:

  2017-03-11 04:20:58.857 15 WARNING nova.objects.resource_provider 
[req-a56a49ea-eda8-4e37-b87e-cd26e9e1 - - - - -] Allocation for MEMORY_MB 
on resource provider 610ee875-405e-4738-8099-5a218fa4986f violates min_unit, 
max_unit, or step_size. Requested: 512, min_unit: 1, max_unit: 1, step_size: 1
  2017-03-11 04:20:58.858 15 ERROR 
nova.api.openstack.placement.handlers.allocation 
[req-a56a49ea-eda8-4e37-b87e-cd26e9e1 - - - - -] Bad inventory
  2017-03-11 04:20:58.858 15 ERROR 
nova.api.openstack.placement.handlers.allocation Traceback (most recent call 
last):
  2017-03-11 04:20:58.858 15 ERROR 
nova.api.openstack.placement.handlers.allocation   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/api/openstack/placement/handlers/allocation.py",
 line 253, in set_allocations
  2017-03-11 04:20:58.858 15 ERROR 
nova.api.openstack.placement.handlers.allocation allocations.create_all()
  2017-03-11 04:20:58.858 15 ERROR 
nova.api.openstack.placement.handlers.allocation   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/objects/resource_provider.py",
 line 1184, in create_all
  2017-03-11 04:20:58.858 15 ERROR 
nova.api.openstack.placement.handlers.allocation 
self._set_allocations(self._context, self.objects)
  2017-03-11 04:20:58.858 15 ERROR 
nova.api.openstack.placement.handlers.allocation   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py",
 line 894, in wrapper
  2017-03-11 04:20:58.858 15 ERROR 
nova.api.openstack.placement.handlers.allocation return fn(*args, **kwargs)
  2017-03-11 04:20:58.858 15 ERROR 
nova.api.openstack.placement.handlers.allocation   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/objects/resource_provider.py",
 line 1146, in _set_allocations
  2017-03-11 04:20:58.858 15 ERROR 
nova.api.openstack.placement.handlers.allocation before_gens = 
_check_capacity_exceeded(conn, allocs)
  2017-03-11 04:20:58.858 15 ERROR 
nova.api.openstack.placement.handlers.allocation   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/objects/resource_provider.py",
 line 1058, in _check_capacity_exceeded
  2017-03-11 04:20:58.858 15 ERROR 
nova.api.openstack.placement.handlers.allocation resource_provider=rp_uuid)
  2017-03-11 04:20:58.858 15 ERROR 
nova.api.openstack.placement.handlers.allocation 
InvalidAllocationConstraintsViolated: Unable to create allocation for 
'MEMORY_MB' on resource provider '610ee875-405e-4738-8099-5a218fa4986f'. The 
requested amount would violate inventory constraints.
  2017-03-11 04:20:58.858 15 ERROR 
nova.api.openstack.placement.handlers.allocation

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1672041/+subscriptions



[Yahoo-eng-team] [Bug 1671373] Re: 'Invalid credentials with the provider'

2017-03-10 Thread Andrey Volkov
I believe this is indeed a credentials or configuration problem, not a Nova issue.
Please see this example: http://ix.io/oRj

** Changed in: nova
   Status: New => Invalid

https://bugs.launchpad.net/bugs/1671373

Title:
  'Invalid credentials with the provider'

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Hi,

  I have some issues with libcloud library when trying to retrieve the
  openstack instance details. Please let me know if you have any idea on
  the below error. This error is not seen in kilo setup.

  --
  >>> driver.list_nodes()
  Traceback (most recent call last):
  File "/home/s.viswanathan/GV/lib/cloudlib.py", line 94, in show_instances
  ERROR 2017-03-07 04:04:20 nodes = sess.list_nodes()
  ERROR 2017-03-07 04:04:20   File 
"/usr/local/lib/python2.7/dist-packages/libcloud/compute/drivers/openstack.py", 
line 177, in list_nodes
  ERROR 2017-03-07 04:04:20 self.connection.request('/servers/detail', 
params=params).object)
  ERROR 2017-03-07 04:04:20   File 
"/usr/local/lib/python2.7/dist-packages/libcloud/common/openstack.py", line 
227, in request
  ERROR 2017-03-07 04:04:20 raw=raw)
  ERROR 2017-03-07 04:04:20   File 
"/usr/local/lib/python2.7/dist-packages/libcloud/common/base.py", line 757, in 
request
  ERROR 2017-03-07 04:04:20 action = self.morph_action_hook(action)
  ERROR 2017-03-07 04:04:20   File 
"/usr/local/lib/python2.7/dist-packages/libcloud/common/openstack.py", line 
294, in morph_action_hook
  ERROR 2017-03-07 04:04:20 self._populate_hosts_and_request_paths()
  ERROR 2017-03-07 04:04:20   File 
"/usr/local/lib/python2.7/dist-packages/libcloud/common/openstack.py", line 
327, in _populate_hosts_and_request_paths
  ERROR 2017-03-07 04:04:20 osa = osa.authenticate(**kwargs)  # may throw 
InvalidCreds
  ERROR 2017-03-07 04:04:20   File 
"/usr/local/lib/python2.7/dist-packages/libcloud/common/openstack_identity.py", 
line 855, in authenticate
  ERROR 2017-03-07 04:04:20 return self._authenticate_2_0_with_password()
  ERROR 2017-03-07 04:04:20   File 
"/usr/local/lib/python2.7/dist-packages/libcloud/common/openstack_identity.py", 
line 880, in _authenticate_2_0_with_password
  ERROR 2017-03-07 04:04:20 return self._authenticate_2_0_with_body(reqbody)
  ERROR 2017-03-07 04:04:20   File 
"/usr/local/lib/python2.7/dist-packages/libcloud/common/openstack_identity.py", 
line 888, in _authenticate_2_0_with_body
  ERROR 2017-03-07 04:04:20 raise InvalidCredsError()
  ERROR 2017-03-07 04:04:20 InvalidCredsError: 'Invalid credentials with the 
provider' 
  --

  
  I have tried from python interpreter manually also, seen same problem.

  from libcloud.compute.types import Provider
  from libcloud.compute.providers import get_driver
  import libcloud.security
  OpenStack = get_driver(Provider.OPENSTACK)
  driver = OpenStack('sathya', 
'sathya',ex_force_auth_url='http://x.x.x.x:5000',ex_force_auth_version='2.0_password')
  driver.list_nodes()

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1671373/+subscriptions



[Yahoo-eng-team] [Bug 1671434] Re: fdatasync() usage breaks Windows compatibility

2017-03-09 Thread Andrey Volkov
*** This bug is a duplicate of bug 1671435 ***
https://bugs.launchpad.net/bugs/1671435

Yes, fdatasync is available only on Unix.
https://docs.python.org/2/library/os.html#os.fdatasync
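A portable flush can be sketched by falling back to os.fsync where os.fdatasync is unavailable. This is the generic cross-platform pattern, not necessarily the fix Nova adopted:

```python
import os
import tempfile

def datasync(fd):
    """Flush file data to disk, using fdatasync where the platform has it."""
    if hasattr(os, 'fdatasync'):
        os.fdatasync(fd)   # Unix: syncs data without forcing all metadata updates
    else:
        os.fsync(fd)       # Windows fallback: full sync, always available

# usage: make sure fetched image data is on disk before continuing
fd, path = tempfile.mkstemp()
os.write(fd, b'image data')
datasync(fd)
os.close(fd)
```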

** Changed in: nova
   Status: New => Confirmed

** This bug has been marked a duplicate of bug 1671435
   fdatasync() usage breaks Windows compatibility

https://bugs.launchpad.net/bugs/1671434

Title:
  fdatasync() usage breaks Windows compatibility

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  The following change uses fdatasync when fetching Glance images, which
  is not supported on Windows: Id9905a87f16f66530623800e33e2581c555ae81d

  For this reason, this operation is now failing on Windows.
  Trace: http://paste.openstack.org/raw/602054/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1671434/+subscriptions



[Yahoo-eng-team] [Bug 1543791] Re: Nova API doc for listing servers is missing metadata filter

2017-03-03 Thread Andrey Volkov
Neither metadata nor system_metadata is supported now.
For example, the response {"badRequest": {"message": "Invalid filter field:
system_metadata.", "code": 400}} occurs because system_metadata is not in the
allowed search options.

fixed_ip has been replaced with ip now; the old name is still supported, but
ip is preferable for the docs.
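For reference, the filter value in the report is just URL-encoded JSON; a quick standard-library sketch of building and decoding such a query string (the filter contents are illustrative):

```python
import json
from urllib.parse import urlencode, parse_qs

# Build the query string for ?metadata={"foo": "bar"}
query = urlencode({'metadata': json.dumps({'foo': 'bar'})})

# The API side decodes it back to a dict before matching servers.
decoded = json.loads(parse_qs(query)['metadata'][0])
```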

** Changed in: nova
   Status: Confirmed => Invalid

https://bugs.launchpad.net/bugs/1543791

Title:
  Nova API doc for listing servers is missing metadata filter

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  I am using Nova v2.1 (Liberty). The documentation for listing servers
  only covers a portion of the supported filter options. I don't know
  what they all should be but I've found that 'metadata' is one of them.
  :-)

  List Servers Doc: http://developer.openstack.org/api-ref-
  compute-v2.1.html#listServers

  Here is the filter method for listing servers where you can see the supported 
filters:
  
https://github.com/openstack/nova/blob/098d4ad487f8431b82d776629f15d13142d42789/nova/compute/api.py#L2027

  The query below searches for servers with the metadata "foo=bar", and
  is URL encoded:

  curl -X "GET"
  
"http://api.openstacknetsdk.org:8774/v2.1/cae3d055dc5e4828adc4fdfe341168f7/servers/detail?metadata=%7B%22foo%22:%22bar%22%7D";

  Without the encoding the query looks like this:

  servers/detail?metadata={"foo":"bar"}

  I haven't tried any other filters found in that method but they may be
  ip (fixed_ip) and system_metadata.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543791/+subscriptions



[Yahoo-eng-team] [Bug 1616769] Re: Cant revert from neutron-port PF to VF because sriov_numvfs parameter get "0" value

2017-03-02 Thread Andrey Volkov
** Changed in: nova
   Status: New => Invalid

https://bugs.launchpad.net/bugs/1616769

Title:
  Cant revert from neutron-port  PF to VF because sriov_numvfs parameter
  get "0" value

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description of problem:

  When manage SR-IOV PFs as Neutron ports I can see that
  /sys/class/net/enp5s0f1/device/sriov_numvfs parameter gets "0" value.
  When I delete the PF port so I can switch to an SR-IOV direct port (VF), I
can't boot a VM because the sriov_numvfs parameter equals "0".

  Version-Release number of selected component (if applicable):

  $ rpm -qa |grep neutron
  python-neutron-lib-0.3.0-0.20160803002107.405f896.el7ost.noarch
  openstack-neutron-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
  puppet-neutron-9.1.0-0.20160813031056.7cf5e07.el7ost.noarch
  python-neutron-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
  openstack-neutron-lbaas-9.0.0-0.20160816191643.4e7301e.el7ost.noarch
  python-neutron-fwaas-9.0.0-0.20160817171450.e1ac68f.el7ost.noarch
  python-neutron-lbaas-9.0.0-0.20160816191643.4e7301e.el7ost.noarch
  openstack-neutron-ml2-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
  openstack-neutron-metering-agent-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
  openstack-neutron-openvswitch-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
  python-neutronclient-5.0.0-0.20160812094704.ec20f7f.el7ost.noarch
  openstack-neutron-common-9.0.0-0.20160817153328.b9169e3.el7ost.noarch
  openstack-neutron-fwaas-9.0.0-0.20160817171450.e1ac68f.el7ost.noarch

  $ rpm -qa |grep nova
  python-novaclient-5.0.1-0.20160724130722.6b11a1c.el7ost.noarch
  openstack-nova-api-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
  puppet-nova-9.1.0-0.20160813014843.b94f0a0.el7ost.noarch
  openstack-nova-common-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
  openstack-nova-novncproxy-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
  openstack-nova-conductor-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
  python-nova-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
  openstack-nova-scheduler-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
  openstack-nova-cert-14.0.0-0.20160817225441.04cef3b.el7ost.noarch
  openstack-nova-console-14.0.0-0.20160817225441.04cef3b.el7ost.noarch

  How reproducible:

  Always

  Steps to Reproduce:

  1.Set SRIOV ENV and PF support : 
https://docs.google.com/document/d/1qQbJlLI1hSlE4uwKpmVd0BoGSDBd8Z0lTzx5itQ6WL0/edit#
  2. BOOT VM that assign to PF (neutron port- direct-physical) -  should boot 
well
  3. check cat /sys/class/net/enp5s0f1/device/sriov_numvfs (= 0)
  4. delete vm  and check again sriov_numvfs (=0)
  5. I expect that numvfs should return to the default value that was configured

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1616769/+subscriptions



[Yahoo-eng-team] [Bug 1637390] Re: nova.consoleauth.manager often denies access to VNC: First login fails or token expires too fast

2017-03-02 Thread Andrey Volkov
First, thanks for the good description.

I believe that your case is valid; for example, to solve the same problem
with MySQL, 'select 1' is issued before each request. I'm not sure it can be
treated as a bug; I would rather mark it as "wishlist" behavior.

** Changed in: nova
   Status: New => Opinion

https://bugs.launchpad.net/bugs/1637390

Title:
  nova.consoleauth.manager often denies access to VNC: First login fails
  or token expires too fast

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  I am confronted with a really strange problem with nova-consoleauth. I
  am running OpenStack Newton on Ubuntu Server 16.04.1. When I use VNC
  from Horizon, I get frequently get the error "Failed to connect to
  server (code: 1006)". There is only a single nova-consoleauth service
  available.

  There are two scenarios:
  - The first login with a fresh token fails. The next one succeeds.
  - The first login succeeds, but the token expires really fast.

  Scenario 1 (nova-consoleauth.log):
  2016-10-28 06:50:55.845 9973 INFO nova.consoleauth.manager 
[req-f0f8fc1d-ae41-443d-9647-83287114bf1d 58963f571cad45b3b7b6272c73f4cb3b 
638770a11625458299c2d205759d09df - - -] Received Token: 
37d8bbfb-03b8-4368-b339-b3791e77a4b7, {'instance_uuid': 
u'f35b3673-e2ac-4bfa-878c-6700efd289d5', 'access_url': 
u'https://10.30.216.100:6080/vnc_auto.html?token=37d8bbfb-03b8-4368-b339-b3791e77a4b7',
 'token': u'37d8bbfb-03b8-4368-b339-b3791e77a4b7', 'last_activity_at': 
1477630255.842381, 'internal_access_path': None, 'console_type': u'novnc', 
'host': u'10.30.200.113', 'port': u'5900'}
  2016-10-28 06:50:56.313 9973 INFO nova.consoleauth.manager 
[req-1d623f93-5e05-462a-8058-4867bca71665 - - - - -] Checking Token: 
37d8bbfb-03b8-4368-b339-b3791e77a4b7, False
  2016-10-28 06:51:22.427 9973 INFO nova.consoleauth.manager 
[req-805a354e-c325-4f67-8a64-9e6b1a689f18 - - - - -] Checking Token: 
37d8bbfb-03b8-4368-b339-b3791e77a4b7, False
  2016-10-28 06:51:48.809 9973 INFO nova.consoleauth.manager 
[req-048a2bf7-ac53-4136-b28a-d3c5903ef226 58963f571cad45b3b7b6272c73f4cb3b 
638770a11625458299c2d205759d09df - - -] Received Token: 
cdf07104-102c-44c6-ba90-56049702e3ae, {'instance_uuid': 
u'8c793085-1f79-458f-92a8-ee95add830da', 'access_url': 
u'https://10.30.216.100:6080/vnc_auto.html?token=cdf07104-102c-44c6-ba90-56049702e3ae',
 'token': u'cdf07104-102c-44c6-ba90-56049702e3ae', 'last_activity_at': 
1477630068.805975, 'internal_access_path': None, 'console_type': u'novnc', 
'host': u'10.30.200.111', 'port': u'5900'}
  2016-10-28 06:52:49.168 9973 INFO nova.consoleauth.manager 
[req-81b8c139-303f-4c6e-9b2d-0f7e0ea1467c - - - - -] Checking Token: 
cdf07104-102c-44c6-ba90-56049702e3ae, True
  2016-10-28 06:53:02.168 9973 INFO nova.consoleauth.manager 
[req-81b8c139-303f-4c6e-9b2d-0f7e0ea1467c - - - - -] Checking Token: 
cdf07104-102c-44c6-ba90-56049702e3ae, True

  Scenario 2 (nova-consoleauth.log):
  2016-10-28 07:11:00.059 9973 INFO nova.consoleauth.manager 
[req-c3cfaf64-935f-4b2e-83f1-6bff35f4e923 ba6f9eddfd154b88b6a45d218fb5b310 
638770a11625458299c2d205759d09df - - -] Received Token: 
bc3c697d-8740-4053-adf9-8133ce5f2296, {'instance_uuid': 
u'8c793085-1f79-458f-92a8-ee95add830da', 'access_url': 
u'https://10.30.216.100:6080/vnc_auto.html?token=bc3c697d-8740-4053-adf9-8133ce5f2296',
 'token': u'bc3c697d-8740-4053-adf9-8133ce5f2296', 'last_activity_at': 
1477631460.049053, 'internal_access_path': None, 'console_type': u'novnc', 
'host': u'10.30.200.111', 'port': u'5900'}
  2016-10-28 07:11:00.494 9973 INFO nova.consoleauth.manager 
[req-34d85dce-54e9-475d-9968-524bedffaa0b - - - - -] Checking Token: 
bc3c697d-8740-4053-adf9-8133ce5f2296, True
  2016-10-28 07:11:07.479 9973 INFO nova.consoleauth.manager 
[req-c835b16c-4bb4-4d9d-83c8-e59c174052a6 - - - - -] Checking Token: 
bc3c697d-8740-4053-adf9-8133ce5f2296, True
  2016-10-28 07:12:24.923 9973 INFO nova.consoleauth.manager 
[req-f1748791-e631-429d-b75e-7b865664a09b - - - - -] Checking Token: 
bc3c697d-8740-4053-adf9-8133ce5f2296, False

  I successfully logged in at "07:11:00.059" with the token
  "bc3c697d-8740-4053-adf9-8133ce5f229" and made some refreshes. At
  "07:12:24.923" the token is suddenly invalid. This is really fast...
  What is the reason?

  This is my nova.conf on the controller:
  [DEFAULT]
  auth_strategy = keystone
  debug = true
  enabled_apis = osapi_compute,metadata
  firewall_driver = nova.virt.firewall.NoopFirewallDriver
  host = os-controller01
  linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
  log_dir = /var/log/nova
  memcached_servers = os-memcache:11211
  metadata_listen = $my_ip
  metadata_listen_port = 8775
  my_ip = 10.30.200.101
  osapi_compute_listen = $my_ip
  osapi_compute_listen_port = 8774
  state_path = /var/lib/nova
  transport_url = 
rabbit://nova:XYZ@os-rabbit0

[Yahoo-eng-team] [Bug 1633535] Re: Cinder fails to attach second volume to Nova VM

2017-03-02 Thread Andrey Volkov
Can't reproduce this behavior. See http://ix.io/nIs for details.

** Changed in: nova
   Status: New => Invalid

** Changed in: nova
 Assignee: aishwarya (bkaishwarya) => (unassigned)

https://bugs.launchpad.net/bugs/1633535

Title:
  Cinder fails to attach second volume to Nova VM

Status in Cinder:
  In Progress
Status in ec2-api:
  Fix Released
Status in Manila:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  Fix Released

Bug description:
  Cinder fails to attach second volume to Nova VM. This second volume gets 
"in-use" status, but does not have any attachments. Also,  such volume cannot 
be detached from VM [4].  Test gerrit change [2] proves that commit to Cinder 
[3] is THE CAUSE of a bug.
  Also, bug was reproduced even before merge of [3] with 
"gate-rally-dsvm-cinder" CI job [4], but, I assume, no one has paid attention 
to this.

  Local testing shows that IF bug appears then volume never gets
  attached and list of attachments stays empty. And waiting between
  'create' (wait until 'available' status) and 'attach' commands does
  not help at all.

  How to reproduce:
  1) Create VM
  2) Create Volume
  3) Attach volume (2) to the VM (1)
  4) Create second volume
  5) Try attach second volume (4) to VM (1) - it will fail.

  [Tempest] Also, the fact that Cinder gates passed with [3] means that
  tempest does not have test that attaches more than one volume to one
  Nova VM. And it is also tempest bug, that should be addressed.

  [Manila] In scope of Manila project, one of its drivers is broken -
  Generic driver that uses Cinder as backend.

  [1] http://logs.openstack.org/64/386364/1/check/gate-manila-tempest-
  dsvm-postgres-generic-singlebackend-ubuntu-xenial-
  nv/eef11b0/logs/screen-m-shr.txt.gz?level=TRACE#_2016-10-14_15_15_19_898

  [2] https://review.openstack.org/387915

  [3]
  
https://github.com/openstack/cinder/commit/6f174b412696bfa6262a5bea3ac42f45efbbe2ce
  ( https://review.openstack.org/385122 )

  [4] http://logs.openstack.org/22/385122/1/check/gate-rally-dsvm-
  cinder/b0332e2/rally-
  plot/results.html.gz#/CinderVolumes.create_snapshot_and_attach_volume/failures

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1633535/+subscriptions



[Yahoo-eng-team] [Bug 1639914] Re: Race condition in nova compute during snapshot

2017-03-01 Thread Andrey Volkov
I believe that it's expected behavior. In the Nova code there is a special
case: if something goes wrong during a snapshot, the image is deleted.

https://github.com/openstack/nova/blob/lm_claims/nova/compute/api.py#L2730
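The cleanup referenced above follows a delete-on-error decorator pattern: wrap the snapshot step and, if it raises, delete the half-built image before re-raising. A generic sketch (class and method names are illustrative, not Nova's actual code):

```python
import functools

def delete_image_on_error(fn):
    """If the wrapped snapshot step fails, delete the image and re-raise."""
    @functools.wraps(fn)
    def wrapper(self, context, image_id, *args, **kwargs):
        try:
            return fn(self, context, image_id, *args, **kwargs)
        except Exception:
            self.image_api.delete(context, image_id)   # best-effort cleanup
            raise
    return wrapper

class FakeImageAPI:
    def __init__(self):
        self.deleted = []
    def delete(self, context, image_id):
        self.deleted.append(image_id)

class Compute:
    def __init__(self):
        self.image_api = FakeImageAPI()

    @delete_image_on_error
    def snapshot(self, context, image_id):
        raise RuntimeError('instance vanished mid-snapshot')
```

This explains the reported symptom: deleting the instance mid-snapshot makes the snapshot step fail, so the image is removed rather than left half-built.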

** Changed in: nova
   Status: New => Invalid

https://bugs.launchpad.net/bugs/1639914

Title:
  Race condition in nova compute during snapshot

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Creating a snapshot and immediately deleting the instance seems to cause a
  race condition. I was able to re-create it on the latest devstack installed
  on 8th November.

  This can be created with following commands.

  1. nova boot --flavor m1.large --image 6d4259ce-5873-42cb-8cbe-
  9873f069c149 testinstance

  id   | bef22f9b-
  ade4-48a1-86c4-b9a007897eb3

  2. nova image-create bef22f9b-ade4-48a1-86c4-b9a007897eb3 testinstance-snap ; 
nova delete bef22f9b-ade4-48a1-86c4-b9a007897eb3
  Request to delete server bef22f9b-ade4-48a1-86c4-b9a007897eb3 has been 
accepted.
  3. nova image-list doesn't show the snapshot

  4. nova list doesn't show the instance

  Nova compute log indicates a race condition while executing CLI
  commands in 2 above

  <182>1 2016-10-28T14:46:41.830208+00:00 hyper1 nova-compute 30056 - [40521 
levelname="INFO" component="nova-compute" funcname="nova.compute.manager" 
request_id="req-e9e4e899-e2a7-4bf8-bdf1-c26f5634cfda" 
user="51fa0172fbdf495e89132f7f4574e750" 
tenant="00ead348c5f9475f8940ab29cd767c5e" instance="[instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3] " 
lineno="/usr/lib/python2.7/site-packages/nova/compute/manager.py:2249"] 
nova.compute.manager Terminating instance
  <183>1 2016-10-28T14:46:42.057653+00:00 hyper1 nova-compute 30056 - [40521 
levelname="DEBUG" component="nova-compute" funcname="nova.compute.manager" 
request_id="req-1c4cf749-a6a8-46af-b331-f70dc1e9f364" 
user="51fa0172fbdf495e89132f7f4574e750" 
tenant="00ead348c5f9475f8940ab29cd767c5e" instance="[instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3] " 
lineno="/usr/lib/python2.7/site-packages/nova/compute/manager.py:420"] 
nova.compute.manager Cleaning up image ae9ebf4b-7dd6-4615-816f-c2f3c7c08530 
decorated_function /usr/lib/python2.7/site-packages/nova/compute/manager.py:420
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3] Traceback (most recent call last):
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 416, in 
decorated_function
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3] *args, **kwargs)
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3038, in 
snapshot_instance
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3] task_states.IMAGE_SNAPSHOT)
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3068, in 
_snapshot_instance
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3] update_task_state)
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1447, in 
snapshot
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3] guest.save_memory_state()
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py", line 363, in 
save_memory_state
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3] self._domain.managedSave(0)
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3] rv = execute(f, *args, **kwargs)
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
bef22f9b-ade4-48a1-86c4-b9a007897eb3]   File 
"/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
  !!!NL!!! 30056 TRACE nova.compute.manager [instance: 
be

[Yahoo-eng-team] [Bug 1626123] [NEW] Functional tests result depends on way test running

2016-09-21 Thread Andrey Volkov
Public bug reported:

Description
===
testtools.run and testtools.run --load-list work differently with nova tests.

Steps to reproduce
==
amadev@pilgrim:~/m/nova$ source .tox/functional/bin/activate
(functional) amadev@pilgrim:~/m/nova$ python -m testtools.run 
nova.tests.functional.notification_sample_tests.test_instance.TestInstanceNotificationSample.test_create_delete_server_with_instance_update
...
Ran 1 test in 5.806s
OK
(functional) amadev@pilgrim:~/m/nova$ testr list-tests 
test_create_delete_server_with_instance_update > /tmp/nova-tests
(functional) amadev@pilgrim:~/m/nova$ python -m testtools.run discover 
--load-list /tmp/nova-tests
...
Ran 1 test in 4.689s
FAILED (failures=1)

Expected result
===
Tests shouldn't depend on the method of running.

Environment
===
upstream master

Logs & Configs
==
http://xsnippet.org/361996/

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1626123

Title:
  Functional tests result depends on way test running

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  testtools.run and testtools.run --load-list work differently with nova tests.

  Steps to reproduce
  ==
  amadev@pilgrim:~/m/nova$ source .tox/functional/bin/activate
  (functional) amadev@pilgrim:~/m/nova$ python -m testtools.run 
nova.tests.functional.notification_sample_tests.test_instance.TestInstanceNotificationSample.test_create_delete_server_with_instance_update
  ...
  Ran 1 test in 5.806s
  OK
  (functional) amadev@pilgrim:~/m/nova$ testr list-tests 
test_create_delete_server_with_instance_update > /tmp/nova-tests
  (functional) amadev@pilgrim:~/m/nova$ python -m testtools.run discover 
--load-list /tmp/nova-tests
  ...
  Ran 1 test in 4.689s
  FAILED (failures=1)

  Expected result
  ===
  Tests shouldn't depend on the method of running.

  Environment
  ===
  upstream master

  Logs & Configs
  ==
  http://xsnippet.org/361996/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1626123/+subscriptions



[Yahoo-eng-team] [Bug 1595587] [NEW] Cannot save HostMapping object

2016-06-23 Thread Andrey Volkov
Public bug reported:

While I was looking at the HostMapping object I found an inconsistency between
a signature and its call site:
HostMapping._save_in_db(context, obj, updates) uses the object's attributes id
and host. But when it is called from HostMapping.save, the second param is
self.host, which is not an object but just a string.

Existing test
(nova.tests.unit.objects.test_host_mapping.TestHostMappingObject.test_save)
doesn't catch this error because _save_in_db is mocked.
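A mocked _save_in_db never touches its arguments, so the wrong argument type slips through; only exercising the real code path surfaces it. A minimal sketch of the pattern (the classes here are illustrative, not the real HostMapping):

```python
from unittest import mock

class Mapping:
    host = 'node1'
    id = 7

    @staticmethod
    def _save_in_db(context, obj, updates):
        # The real implementation reads attributes off the object.
        return obj.id, obj.host

    def save(self, context):
        # Bug pattern: passing the plain string instead of the object itself.
        return Mapping._save_in_db(context, self.host, {})

# With the DB helper mocked out, the test passes and hides the bug:
with mock.patch.object(Mapping, '_save_in_db') as fake:
    Mapping().save(None)
    assert fake.called

# Calling through to the real helper exposes it immediately:
try:
    Mapping().save(None)
except AttributeError:   # 'str' object has no attribute 'id'
    pass
```

The takeaway: a unit test that mocks _save_in_db should still assert on the types of the arguments it was called with, or be complemented by a test that hits the real method.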

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1595587

Title:
  Cannot save HostMapping object

Status in OpenStack Compute (nova):
  New

Bug description:
  While I was looking at the HostMapping object I found an inconsistency
  between a signature and its call site: HostMapping._save_in_db(context, obj,
  updates) uses the object's attributes id and host. But when it is called
  from HostMapping.save, the second param is self.host, which is not an object
  but just a string.

  Existing test
  (nova.tests.unit.objects.test_host_mapping.TestHostMappingObject.test_save)
  doesn't catch this error because _save_in_db is mocked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1595587/+subscriptions



[Yahoo-eng-team] [Bug 1563295] Re: Inconsistent behaviour when unicode args used for logging

2016-05-31 Thread Andrey Volkov
** No longer affects: nova

-- 
https://bugs.launchpad.net/bugs/1563295

Title:
  Inconsistent behaviour when unicode args used for logging

Status in oslo.log:
  Confirmed

Bug description:
  When I used unicode args for a log message, I got a UnicodeDecodeError.
  If oslo_log is replaced with logging from the Python standard library,
  logging works fine.

  [1] Sample script, logging with oslo_log module 
http://paste.openstack.org/show/492234/
  [2] Sample script, logging with Python logging module  
http://paste.openstack.org/show/492234/
  [3] Sample file with unicode data http://paste.openstack.org/show/492235/

  How to reproduce:

  1 Save [3] as 'text.txt'
  2 Run [1]

  Expected result:

  File log.txt contains
  "жлдоыфв
  фыжваофждыов"

  Actual result:

  Exception "UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in
  position 0: ordinal not in range(128)"
  Logged from file oslo_test.py, line 9

  $ pip freeze|grep oslo.log

  oslo.log==3.2.0
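  For context, the 0xd0 named in the traceback is simply the first UTF-8
  byte of the Cyrillic 'ж' from the sample file; decoding such bytes with
  the default ascii codec fails the same way. A quick standalone check,
  independent of oslo.log:

```python
# The Cyrillic sample text from the bug report, encoded as UTF-8.
data = 'жлдоыфв'.encode('utf8')

# 0xd0 is the UTF-8 lead byte of 'ж' -- the exact byte named in the
# traceback ("can't decode byte 0xd0 in position 0").
assert data[0] == 0xd0

# Decoding it with the ascii codec fails the same way the log call did:
try:
    data.decode('ascii')
    error = None
except UnicodeDecodeError as e:
    error = e

assert error is not None
assert error.object[error.start] == 0xd0

# Decoding with the correct codec round-trips cleanly:
assert data.decode('utf-8') == 'жлдоыфв'
```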

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.log/+bug/1563295/+subscriptions



[Yahoo-eng-team] [Bug 1563295] Re: Inconsistent behaviour when unicode args used for logging

2016-05-31 Thread Andrey Volkov
Can be easily reproduced with:

    from oslo_log import log as logging
    LOG = logging.getLogger(__name__)
    LOG.info("Oslo Logging %s", u'\u2622'.encode('utf8'))

The issue is caused by the implicit decoding of logging arguments
performed by _ensure_unicode on the message:
https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L129
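A minimal sketch of the failure mode, with no oslo.log required
(ensure_unicode below is a hypothetical stand-in for the helper linked
above, not its actual implementation):

```python
# Hypothetical stand-in for oslo.log's _ensure_unicode: coerce a bytes
# message to text. The problem is relying on the default (ascii) codec,
# which cannot decode non-ASCII UTF-8 bytes.
def ensure_unicode(msg, encoding='ascii'):
    if isinstance(msg, bytes):
        return msg.decode(encoding)
    return msg


arg = '\u2622'.encode('utf8')  # the byte-string argument from the repro

# Implicit ascii decoding raises, just like the logged traceback:
try:
    ensure_unicode(arg)
    failed = False
except UnicodeDecodeError:
    failed = True
assert failed

# Decoding with an explicit utf-8 codec succeeds:
assert ensure_unicode(arg, 'utf-8') == '\u2622'
```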


** Changed in: oslo.log
   Status: New => Confirmed

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
https://bugs.launchpad.net/bugs/1563295

Title:
  Inconsistent behaviour when unicode args used for logging

Status in oslo.log:
  Confirmed

Bug description:
  When I used unicode args for a log message, I got a UnicodeDecodeError.
  If oslo_log is replaced with logging from the Python standard library,
  logging works fine.

  [1] Sample script, logging with oslo_log module 
http://paste.openstack.org/show/492234/
  [2] Sample script, logging with Python logging module  
http://paste.openstack.org/show/492234/
  [3] Sample file with unicode data http://paste.openstack.org/show/492235/

  How to reproduce:

  1 Save [3] as 'text.txt'
  2 Run [1]

  Expected result:

  File log.txt contains
  "жлдоыфв
  фыжваофждыов"

  Actual result:

  Exception "UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in
  position 0: ordinal not in range(128)"
  Logged from file oslo_test.py, line 9

  $ pip freeze|grep oslo.log

  oslo.log==3.2.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.log/+bug/1563295/+subscriptions
