[Yahoo-eng-team] [Bug 1636688] Re: 500 error while creating instance with numa flavor setting

2017-02-03 Thread Sergey Nikitin
** No longer affects: nova/newton

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1636688

Title:
  500 error while creating instance with numa flavor setting

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  stack@s2600wt:~/devstack$ openstack flavor create --vcpus 6 --ram 6144 --disk 40 --property hw:numa_nodes=2 --property hw:numa_cpus.0=0,1 --property hw:numa_mem.0=2048 --property hw:numa_cpus.1=2,3,4,5 --property hw:numa_mem.1=4096 --property hw:cpu_policy=shared --property hw:cpu_thread_policy=require test_numa_with_shared_thread_require
  WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
 
  +----------------------------+----------------------------------------------------------------+
  | Field                      | Value                                                          |
  +----------------------------+----------------------------------------------------------------+
  | OS-FLV-DISABLED:disabled   | False                                                          |
  | OS-FLV-EXT-DATA:ephemeral  | 0                                                              |
  | disk                       | 40                                                             |
  | id                         | 94b49753-abad-4753-b0df-1699da04baa4                           |
  | name                       | test_numa_with_shared_thread_require                           |
  | os-flavor-access:is_public | True                                                           |
  | properties                 | hw:cpu_policy='shared', hw:cpu_thread_policy='require',        |
  |                            | hw:numa_cpus.0='0,1', hw:numa_cpus.1='2,3,4,5',                |
  |                            | hw:numa_mem.0='2048', hw:numa_mem.1='4096', hw:numa_nodes='2'  |
  | ram                        | 6144                                                           |
  | rxtx_factor                | 1.0                                                            |
  | swap                       |                                                                |
  | vcpus                      | 6                                                              |
  +----------------------------+----------------------------------------------------------------+

  stack@s2600wt:~/devstack$ nova boot --flavor test_numa_with_shared_thread_require --image cirros-0.3.4-x86_64-uec test1
  ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  (HTTP 500) (Request-ID: req-5f0efa65-6545-45d1-b499-a48817720f44)


  == API LOG ==

  2016-10-26 11:53:01.735 ERROR nova.api.openstack.extensions [req-5f0efa65-6545-45d1-b499-a48817720f44 admin admin] Unexpected exception in API method
  2016-10-26 11:53:01.735 TRACE nova.api.openstack.extensions Traceback (most recent call last):
  2016-10-26 11:53:01.735 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 338, in wrapped
  2016-10-26 11:53:01.735 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
[Yahoo-eng-team] [Bug 1648054] [NEW] Upper constraints for 'hacking' lib don't work for pep8

2016-12-07 Thread Sergey Nikitin
Public bug reported:

Upper constraints for 'hacking' lib don't work for pep8

A couple of days ago this patch was merged into Nova:
https://review.openstack.org/#/c/334048/11/nova/tests/unit/virt/libvirt/test_vif.py

It contains these lines:

   self.assertEqual(None, conf.vhost_queues)
   self.assertEqual(None, conf.driver_name)

From the pep8 point of view these lines are incorrect. 'assertIsNone'
should be used here (rule N318).
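
For illustration, the N318-compliant form of those assertions would be:

   self.assertIsNone(conf.vhost_queues)
   self.assertIsNone(conf.driver_name)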

But the pep8 job was 'green' when the patch merged. Unfortunately, a number of
contributors then saw 'tox -e pep8' fail in their local repos because of these
lines, which gave rise to this patch:
https://review.openstack.org/#/c/407870/
That patch fixes the current problem, but it doesn't prevent such problems in the future.

The reason the pep8 job was green lies in the 'hacking' lib. A new release,
0.13.0, was published 6 days ago and has a bug which causes our problem.

The buggy commit has already been reverted in the 'hacking' lib, but a new
version of the lib without the bug hasn't been released yet.

So the conclusion is: a new release of the 'hacking' lib breaks the nova pep8
job, turning it into a false positive.
The question is: why was the new release of 'hacking' installed at all?

We have an upper constraint for the hacking lib in test-requirements.txt:

hacking<0.11,>=0.10.0

But we don't use these constraints for pep8. For pep8 we only list the
'hacking' dependency in tox.ini, without any constraints.

https://github.com/openstack/nova/blob/master/tox.ini#L44

Because of that, we install the latest version of 'hacking' every time.

I see two ways to solve it:
1) Install the whole test-requirements.txt for pep8. I think this is a bad
solution because we would install a lot of useless dependencies.
2) Add the 'hacking' constraint to tox.ini. This is not ideal either, because
when the constraint in test-requirements.txt is updated we can forget to update
it in tox.ini. But right now I don't see a better solution. If you have one,
please share.

** Affects: nova
 Importance: Medium
 Status: New


** Tags: testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648054

Title:
  Upper constraints for 'hacking' lib don't work for pep8

Status in OpenStack Compute (nova):
  New

Bug description:
  Upper constraints for 'hacking' lib don't work for pep8

  A couple of days ago this patch was merged into Nova:
  
https://review.openstack.org/#/c/334048/11/nova/tests/unit/virt/libvirt/test_vif.py

  It contains these lines:

 self.assertEqual(None, conf.vhost_queues)
 self.assertEqual(None, conf.driver_name)

  From the pep8 point of view these lines are incorrect. 'assertIsNone'
  should be used here (rule N318).

  But the pep8 job was 'green' when the patch merged. Unfortunately, a number of
  contributors then saw 'tox -e pep8' fail in their local repos because of these
  lines, which gave rise to this patch:
  https://review.openstack.org/#/c/407870/
  That patch fixes the current problem, but it doesn't prevent such problems in the future.

  The reason the pep8 job was green lies in the 'hacking' lib. A new release,
  0.13.0, was published 6 days ago and has a bug which causes our problem.

  The buggy commit has already been reverted in the 'hacking' lib, but a new
  version of the lib without the bug hasn't been released yet.

  So the conclusion is: a new release of the 'hacking' lib breaks the nova pep8
  job, turning it into a false positive.
  The question is: why was the new release of 'hacking' installed at all?

  We have an upper constraint for the hacking lib in test-requirements.txt:

  hacking<0.11,>=0.10.0

  But we don't use these constraints for pep8. For pep8 we only list the
  'hacking' dependency in tox.ini, without any constraints.

  https://github.com/openstack/nova/blob/master/tox.ini#L44

  Because of that, we install the latest version of 'hacking' every time.

  I see two ways to solve it:
  1) Install the whole test-requirements.txt for pep8. I think this is a bad
  solution because we would install a lot of useless dependencies.
  2) Add the 'hacking' constraint to tox.ini. This is not ideal either, because
  when the constraint in test-requirements.txt is updated we can forget to update
  it in tox.ini. But right now I don't see a better solution. If you have one,
  please share.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1648054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1637454] Re: 'require' CPU thread allocation policy does not always guarantee that other VMs wouldn't use current CPU core

2016-11-09 Thread Sergey Nikitin
Chris is right

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1637454

Title:
  'require' CPU thread allocation policy does not always guarantee that
  other VMs wouldn't use current CPU core

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Environment:
  A NUMA node with 2 CPU cores and HyperThreading. This means we have 4 vCPUs:
  (0, 1) (2, 3), where '()' denotes siblings. In this case we can boot only 2 VMs
  with the 'require' CPU thread allocation policy, because this policy guarantees
  that other VMs won't use these cores. If we try to boot a third VM with the
  'require' policy it will fail with 'No valid host found'.

  But we are still able to boot VMs with the 'dedicated' CPU allocation policy
  on these cores.

  Steps to reproduce:

  1. Create Flavor with 'require' policy

  nova flavor-create numa_required 998 128 0 1
  nova flavor-key numa_required set hw:cpu_policy=dedicated
  nova flavor-key numa_required set hw:cpu_thread_policy=require

  2. Create a flavor with just the 'dedicated' policy. hw:cpu_thread_policy in
  this case will be 'prefer', because that is the default value.

  nova flavor-create numa_dedicated 999 128 0 1
  nova flavor-key numa_dedicated set hw:cpu_policy=dedicated

  3. Boot two VMs with 'require' policy

  nova boot --flavor numa_required --image cirros --nic net-id=*** vm1
  nova boot --flavor numa_required --image cirros --nic net-id=*** vm2

  4. Boot a VM with the 'dedicated' policy

  nova boot --flavor numa_dedicated --image cirros --nic net-id=*** vm3

  Expected result:

Fail with 'No valid host found'.

  Actual result:

VM will be successfully booted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1637454/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1613998] [NEW] Common API samples for server creation can't work with microversions > 2.37

2016-08-17 Thread Sergey Nikitin
Public bug reported:

Change https://review.openstack.org/#/c/316398/37 added new
microversion 2.37. This microversion added the required field 'networks'
to the create server request. By default, Nova functional API tests use
samples from the '/servers' directory to create a server. But now such
requests get 400 Bad Request because of the missing 'networks' field.

Because of this bug, changes which add a new microversion will get -1
from the functional job.
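
For illustration, a minimal sketch of a server-create request body that
satisfies the 2.37 schema (the field values are placeholders, not taken from
the Nova samples):

# From microversion 2.37 onwards the 'networks' key is required in the
# create-server request; it may be a list of networks or the special
# strings 'auto' / 'none'.
create_server_body = {
    "server": {
        "name": "new-server-test",     # placeholder name
        "imageRef": "<image-uuid>",    # placeholder image UUID
        "flavorRef": "1",              # placeholder flavor id
        "networks": "auto",            # required since 2.37
    }
}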

** Affects: nova
 Importance: Critical
 Assignee: Sergey Nikitin (snikitin)
 Status: In Progress


** Tags: testing

** Description changed:

  Change https://review.openstack.org/#/c/316398/37 added new
  microversion 2.37. This microversion added required field 'networks'
  into create server request. By default Nova functional api tests use
  samples from '/servers' directory to create a server. But now such
  requests got 400 Bad Request because of missed 'networks' field.
+ 
+ Because of this bug changes which added new microversion will got -1
+ from functional job

** Description changed:

  Change https://review.openstack.org/#/c/316398/37 added new
  microversion 2.37. This microversion added required field 'networks'
  into create server request. By default Nova functional api tests use
  samples from '/servers' directory to create a server. But now such
  requests got 400 Bad Request because of missed 'networks' field.
  
- Because of this bug changes which added new microversion will got -1
- from functional job
+ Because of this bug changes which add new microversion will got -1 from
+ functional job

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1613998

Title:
  Common API samples for server creation can't work with microversions >
  2.37

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Change https://review.openstack.org/#/c/316398/37 added new
  microversion 2.37. This microversion added required field 'networks'
  into create server request. By default Nova functional api tests use
  samples from '/servers' directory to create a server. But now such
  requests got 400 Bad Request because of missed 'networks' field.

  Because of this bug changes which add new microversion will got -1
  from functional job

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1613998/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585214] [NEW] Cannot pin/unpin cpus during cold migration with enabled CPU pinning

2016-05-24 Thread Sergey Nikitin
Public bug reported:

With CPU pinning enabled, VM migration doesn't work properly.

Steps to reproduce:
1) Deploy an env with 2 compute nodes with pinning enabled
2) Create host aggregates for these compute nodes
3) Create 3 flavors:
- flavor with 2 cpu and 2 numa node
nova flavor-create m1.small.performance-2 auto 2048 20 2
nova flavor-key m1.small.performance-2 set hw:cpu_policy=dedicated
nova flavor-key m1.small.performance-2 set 
aggregate_instance_extra_specs:pinned=true
nova flavor-key m1.small.performance-2 set hw:numa_nodes=2
nova boot --image TestVM --nic net-id=93e25766-2a22-486c-af82-c62054260c26 
--flavor m1.small.performance-2 test2
- flavor with 2 cpu and 1 numa node
nova flavor-create m1.small.performance-1 auto 2048 20 2
nova flavor-key m1.small.performance-1 set hw:cpu_policy=dedicated
nova flavor-key m1.small.performance-1 set 
aggregate_instance_extra_specs:pinned=true
nova flavor-key m1.small.performance-1 set hw:numa_nodes=1
nova boot --image TestVM --nic net-id=93e25766-2a22-486c-af82-c62054260c26 
--flavor m1.small.performance-1 test3
- flavor with 1 cpu and 1 numa node
nova flavor-create m1.small.performance auto 512 1 1
nova flavor-key m1.small.performance set hw:cpu_policy=dedicated
nova flavor-key m1.small.performance set 
aggregate_instance_extra_specs:pinned=true
nova flavor-key m1.small.performance set hw:numa_nodes=1
4) Boot vm1, vm2 and vm3 with these flavors
5) Migrate vm1: nova migrate vm1
Confirm resizing: nova resize-confirm vm1
Expected result:
vm1 migrates to another node
Actual result:
vm1 is in ERROR:
{"message": "Cannot pin/unpin cpus [17] from the following pinned set [3]", "code": 400, "created": "2016-03-31T09:26:00Z"}
6) Migrate vm2: nova migrate vm2
Confirm resizing: nova resize-confirm vm2
Repeat the migration and confirmation one more time
Expected result:
vm2 migrates to another node
Actual result:
vm2 is in ERROR
7) Repeat the same steps with vm3: nova migrate vm3
The same result


This happens because confirm_resize() tries to clean up the source host using
the NUMA topology from the destination host.

** Affects: nova
     Importance: Medium
 Assignee: Sergey Nikitin (snikitin)
 Status: Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1585214

Title:
  Cannot pin/unpin cpus during cold migration with enabled CPU pinning

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  With CPU pinning enabled, VM migration doesn't work properly.

  Steps to reproduce:
  1) Deploy an env with 2 compute nodes with pinning enabled
  2) Create host aggregates for these compute nodes
  3) Create 3 flavors:
  - flavor with 2 cpu and 2 numa node
  nova flavor-create m1.small.performance-2 auto 2048 20 2
  nova flavor-key m1.small.performance-2 set hw:cpu_policy=dedicated
  nova flavor-key m1.small.performance-2 set 
aggregate_instance_extra_specs:pinned=true
  nova flavor-key m1.small.performance-2 set hw:numa_nodes=2
  nova boot --image TestVM --nic net-id=93e25766-2a22-486c-af82-c62054260c26 
--flavor m1.small.performance-2 test2
  - flavor with 2 cpu and 1 numa node
  nova flavor-create m1.small.performance-1 auto 2048 20 2
  nova flavor-key m1.small.performance-1 set hw:cpu_policy=dedicated
  nova flavor-key m1.small.performance-1 set 
aggregate_instance_extra_specs:pinned=true
  nova flavor-key m1.small.performance-1 set hw:numa_nodes=1
  nova boot --image TestVM --nic net-id=93e25766-2a22-486c-af82-c62054260c26 
--flavor m1.small.performance-1 test3
  - flavor with 1 cpu and 1 numa node
  nova flavor-create m1.small.performance auto 512 1 1
  nova flavor-key m1.small.performance set hw:cpu_policy=dedicated
  nova flavor-key m1.small.performance set 
aggregate_instance_extra_specs:pinned=true
  nova flavor-key m1.small.performance set hw:numa_nodes=1
  4) Boot vm1, vm2 and vm3 with these flavors
  5) Migrate vm1: nova migrate vm1
  Confirm resizing: nova resize-confirm vm1
  Expected result:
  vm1 migrates to another node
  Actual result:
  vm1 is in ERROR:
  {"message": "Cannot pin/unpin cpus [17] from the following pinned set [3]", "code": 400, "created": "2016-03-31T09:26:00Z"}
  6) Migrate vm2: nova migrate vm2
  Confirm resizing: nova resize-confirm vm2
  Repeat the migration and confirmation one more time
  Expected result:
  vm2 migrates to another node
  Actual result:
  vm2 is in ERROR
  7) Repeat the same steps with vm3: nova migrate vm3
  The same result

  
  This happens because confirm_resize() tries to clean up the source host using
  the NUMA topology from the destination host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1585214/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582278] [NEW] [SR-IOV][CPU Pinning] nova compute can try to boot VM with CPUs from one NUMA node and PCI device from another NUMA node.

2016-05-16 Thread Sergey Nikitin
 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566] six.reraise(*exc_info)
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1570, in 
_allocate_network_async
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566] bind_host_id=bind_host_id)
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566]   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 666, in 
allocate_for_instance
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566] self._delete_ports(neutron, instance, 
created_port_ids)
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566] self.force_reraise()
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566]   File 
"/usr/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566] six.reraise(self.type_, self.value, 
self.tb)
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566]   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 645, in 
allocate_for_instance
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566] bind_host_id=bind_host_id)
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566]   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 738, in 
_populate_neutron_extension_values
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566] port_req_body)
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566]   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 709, in 
_populate_neutron_binding_profile
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566] instance, pci_request_id).pop()
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566] IndexError: pop from empty list
2016-05-13 08:25:57.937 29097 ERROR nova.compute.manager [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566]
2016-05-13 08:25:57.939 29097 INFO nova.compute.manager 
[req-26138c0b-fa55-4ff8-8f3a-aad980e3c815 d864c4308b104454b7b46fb652f4f377 
9322dead0b5d440986b12596d9cbff5b - - -] [instance: 
4e691469-893d-4b24-a0a8-00bbee0fa566] Terminating instance

The problem is in nova/compute/resource_tracker.py. In method
instance_claim():

claim = claims.Claim(context, instance_ref, self, self.compute_node,
                     overhead=overhead, limits=limits)
if self.pci_tracker:
    self.pci_tracker.claim_instance(context, instance_ref)

instance_ref.numa_topology = claim.claimed_numa_topology
self._set_instance_host_and_node(instance_ref)

1) Here nova creates a claim with the correct NUMA node with CPU pinning and PCI
devices (in our case it is node-1).
2) Nova calls pci_tracker.claim_instance() with instance_ref, BUT instance_ref
does not yet contain the information about the needed NUMA node. That is why
claim_instance() chooses node-0. In this case we can't associate the requested
PCI devices with the instance, because these devices are associated with node-1.
3) Nova then puts the correct NUMA node (node-1 from step 1) onto instance_ref.
4) We end up with an instance without PCI devices.

** Affects: nova
 Importance: Medium
 Assignee: Sergey Nikitin (snikitin)
 Status: Confirmed

** Changed in: nova
   Status: New => Confirmed

** Description changed:

  Environment:
  Two NUMA nodes on compute host (node-0 and node-1).
  One SR-IOV PCI device associated with NUMA node-1.
  
  Steps to reproduce:
  
  Steps to reproduce:
-  1) Deploy env with SR-IOV and CPU pinning enable
-  2) Create new flavor with cpu pinning:
+  1) Deploy env with SR-IOV and CPU pinning enable
+  2) Create new flavor with cpu pinning:
  nova flavor-show m1.small.performance
  
  +----------+-------+
  | Property | Value |
  +----------+-------+

[Yahoo-eng-team] [Bug 1516546] [NEW] Tag has no property 'project_id' during filtering by tags

2015-11-16 Thread Sergey Nikitin
Public bug reported:

When we use filter_by(), the fields for filtering are extracted from the
primary entity of the query, or the last entity that was the target of a
call to Query.join().

If the db query contains the filters 'project_id' and 'tag' (or 'tag-any'),
a db error will be raised:

  File "nova/db/api.py", line 677, in instance_get_all_by_filters
use_slave=use_slave)
  File "nova/db/sqlalchemy/api.py", line 204, in wrapper
return f(*args, **kwargs)
  File "nova/db/sqlalchemy/api.py", line 1877, in 
instance_get_all_by_filters
sort_dirs=[sort_dir])
  File "nova/db/sqlalchemy/api.py", line 204, in wrapper
return f(*args, **kwargs)
  File "nova/db/sqlalchemy/api.py", line 2046, in 
instance_get_all_by_filters_sort
filters, exact_match_filter_names)
  File "nova/db/sqlalchemy/api.py", line 2220, in _exact_instance_filter
query = query.filter_by(**filter_dict)
  File 
"/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 1345, in filter_by
for key, value in kwargs.items()]
  File 
"/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/base.py",
 line 383, in _entity_descriptor
(description, key)
sqlalchemy.exc.InvalidRequestError: Entity '' has no property 'project_id'

It happens because we use join(models.Tag) to filter instances by tags.
SQLAlchemy tries to find the 'project_id' field in the Tag model. To fix this
issue we should use filter() instead of filter_by(); filter() works fine
with join().
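
For illustration, a minimal self-contained sketch (toy Instance/Tag models, not
Nova's actual ones) of why filter_by() breaks after the join while filter()
does not:

import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Instance(Base):
    __tablename__ = 'instances'
    id = sa.Column(sa.Integer, primary_key=True)
    project_id = sa.Column(sa.String)

class Tag(Base):
    __tablename__ = 'tags'
    id = sa.Column(sa.Integer, primary_key=True)
    instance_id = sa.Column(sa.Integer, sa.ForeignKey('instances.id'))
    tag = sa.Column(sa.String)

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

query = session.query(Instance).join(Tag)

# filter_by() resolves its keywords against the last joined entity (Tag), so
# this would raise InvalidRequestError: entity has no property 'project_id':
# query = query.filter_by(project_id='p1')

# filter() names the entity explicitly, so the join does not get in the way.
query = query.filter(Instance.project_id == 'p1', Tag.tag == 'web')
print(query.all())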

** Affects: nova
 Importance: Medium
 Assignee: Sergey Nikitin (snikitin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1516546

Title:
  Tag has no property 'project_id' during filtering by tags

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When we use filter_by(), the fields for filtering are extracted from the
  primary entity of the query, or the last entity that was the target of
  a call to Query.join().

  If the db query contains the filters 'project_id' and 'tag' (or 'tag-any'),
  a db error will be raised:

File "nova/db/api.py", line 677, in instance_get_all_by_filters
  use_slave=use_slave)
File "nova/db/sqlalchemy/api.py", line 204, in wrapper
  return f(*args, **kwargs)
File "nova/db/sqlalchemy/api.py", line 1877, in 
instance_get_all_by_filters
  sort_dirs=[sort_dir])
File "nova/db/sqlalchemy/api.py", line 204, in wrapper
  return f(*args, **kwargs)
File "nova/db/sqlalchemy/api.py", line 2046, in 
instance_get_all_by_filters_sort
  filters, exact_match_filter_names)
File "nova/db/sqlalchemy/api.py", line 2220, in _exact_instance_filter
  query = query.filter_by(**filter_dict)
File 
"/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 1345, in filter_by
  for key, value in kwargs.items()]
File 
"/opt/stack/nova/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/orm/base.py",
 line 383, in _entity_descriptor
  (description, key)
  sqlalchemy.exc.InvalidRequestError: Entity '' has no property 'project_id'

  It happens because we use join(models.Tag) to filter instances by
  tags. SQLAlchemy tries to find the 'project_id' field in the Tag model. To fix
  this issue we should use filter() instead of filter_by(); filter()
  works fine with join().

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1516546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506786] [NEW] Incorrect name of 'tag' and 'tag-any' filters

2015-10-16 Thread Sergey Nikitin
Public bug reported:

According to the spec http://specs.openstack.org/openstack/nova-
specs/specs/mitaka/approved/tag-instances.html these filters should be
named 'tags' and 'tags-any'.

** Affects: nova
 Importance: Low
 Assignee: Sergey Nikitin (snikitin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506786

Title:
  Incorrect name of 'tag' and 'tag-any' filters

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  According to the spec http://specs.openstack.org/openstack/nova-
  specs/specs/mitaka/approved/tag-instances.html these filters should be
  named 'tags' and 'tags-any'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506786/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1502107] [NEW] Method 'get_all' from servicegroup API isn't implemented in drivers

2015-10-02 Thread Sergey Nikitin
Public bug reported:

Change Idc0dfbbe1887e11166acb7d989dd5466751761af removed the get_all method
from the mc and db drivers, and made this method private in the zk driver. Now
none of the servicegroup drivers (mc, db, zk) implements the get_all method,
which is very confusing. The method is used nowhere, so it should be removed.

** Affects: nova
 Importance: Low
 Assignee: Sergey Nikitin (snikitin)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1502107

Title:
  Method 'get_all' from servicegroup API isn't implemented in drivers

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Change Idc0dfbbe1887e11166acb7d989dd5466751761af removed the get_all
  method from the mc and db drivers, and made this method private in the zk
  driver. Now none of the servicegroup drivers (mc, db, zk) implements the
  get_all method, which is very confusing. The method is used nowhere, so it
  should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1502107/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461473] Re: Unable to call neutron v2 API from nova: AuthorizationFailure: No valid authentication is available

2015-08-05 Thread Sergey Nikitin
** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461473

Title:
  Unable to call neutron v2 API from nova: AuthorizationFailure: No
  valid authentication is available

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When I call the neutron v2 API from a nova driver, it reports
  AuthorizationFailure. I have no idea how this happens; in the
  previous version there was no problem:

  Traceback (most recent call last):
    File /opt/mydriver/compute/manager.py, line 2006, in _fix_instance_nw_info
  data = self.network_api.list_ports(context, **search_opts)
    File /usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py, line 
724, in list_ports
  return get_client(context).list_ports(**search_opts)
    File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
102, in with_params
  ret = self.function(instance, *args, **kwargs)
    File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
535, in list_ports
  **_params)
    File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
307, in list
  for r in self._pagination(collection, path, **params):
    File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
321, in _pagination
  res = self.get(path, params=params)
    File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
293, in get
  headers=headers, params=params)
    File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
270, in retry_request
  headers=headers, params=params)
    File /usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py, line 
200, in do_request
  content_type=self.content_type())
    File /usr/lib/python2.7/site-packages/neutronclient/client.py, line 306, 
in do_request
  return self.request(url, method, **kwargs)
    File /usr/lib/python2.7/site-packages/neutronclient/client.py, line 294, 
in request
  resp = super(SessionClient, self).request(*args, **kwargs)
    File /usr/lib/python2.7/site-packages/keystoneclient/adapter.py, line 95, 
in request
  return self.session.request(url, method, **kwargs)
    File /usr/lib/python2.7/site-packages/keystoneclient/utils.py, line 318, 
in inner
  return func(*args, **kwargs)
    File /usr/lib/python2.7/site-packages/keystoneclient/session.py, line 
317, in request
  raise exceptions.AuthorizationFailure(msg)
  AuthorizationFailure: No valid authentication is available

  I checked the /etc/nova/nova.conf and /etc/neutron/neutron.conf, it shows:
  nova.conf:
  [keystone_authtoken]
  auth_uri = http://myip:5000/v2.0
  identity_uri = http://myip:35357/
  auth_version = v2.0
  admin_tenant_name = service
  admin_user = nova
  admin_password = W0lCTTp2MV1iY3JhZmducHgtcGJ6Y2hncg==
  signing_dir = /var/cache/nova/api
  hash_algorithms = md5
  insecure = true

  [neutron]
  url = http://myip:9696
  insecure = true
  auth_strategy = keystone
  admin_tenant_name = service
  admin_username = neutron
  admin_password = W0lCTTp2MV1iY3JhZmducHgtYXJnamJleA==
  admin_auth_url = http://myip:5000/v2.0
  timeout = 30
  region_name =
  ovs_bridge = br-int
  extension_sync_interval = 600
  cafile =
  service_metadata_proxy = true
  metadata_proxy_shared_secret = W0lCTTp2MV1hcmhnZWJhX3pyZ25xbmduX2ZycGVyZw==

  neutron.conf:
  [keystone_authtoken]
  auth_uri = http://myip:5000/v2.0
  identity_uri = http://myip:35357/
  auth_version = v2.0
  admin_tenant_name = service
  admin_user = neutron
  admin_password = W0lCTTp2MV1iY3JhZmducHgtYXJnamJleA==
  signing_dir = /var/lib/neutron/keystone-signing
  hash_algorithms = md5
  insecure = false

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461473/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480226] [NEW] SAWarning: The IN-predicate on tags.tag was invoked with an empty sequence

2015-07-31 Thread Sergey Nikitin
Public bug reported:

When the 'to_delete' list of instance tags in db method
instance_tag_set() is empty, warnings are printed in the nova logs:

SAWarning: The IN-predicate on tags.tag was invoked with an empty
sequence. This results in a contradiction, which nonetheless can be
expensive to evaluate. Consider alternative strategies for improved
performance.

The fix is to not query the DB in that case.
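
A minimal sketch of the fix idea (toy Tag model and set_tags() helper, not
Nova's actual code): skip the DELETE entirely when there is nothing to delete,
so SQLAlchemy never has to render an empty IN () predicate.

import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Tag(Base):
    __tablename__ = 'tags'
    instance_uuid = sa.Column(sa.String, primary_key=True)
    tag = sa.Column(sa.String, primary_key=True)

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

def set_tags(instance_uuid, to_delete, to_add):
    # Only issue DELETE ... WHERE tag IN (...) when there is something to
    # delete; an empty IN-list is what triggers the SAWarning.
    if to_delete:
        session.query(Tag).filter(
            Tag.instance_uuid == instance_uuid,
            Tag.tag.in_(to_delete)).delete(synchronize_session=False)
    session.add_all([Tag(instance_uuid=instance_uuid, tag=t) for t in to_add])
    session.commit()

set_tags('uuid-1', to_delete=[], to_add=['web', 'db'])  # no DELETE is issued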

** Affects: nova
 Importance: Undecided
 Assignee: Sergey Nikitin (snikitin)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480226

Title:
  SAWarning: The IN-predicate on tags.tag was invoked with an empty
  sequence

Status in OpenStack Compute (nova):
  New

Bug description:
  When the 'to_delete' list of instance tags in db method
  instance_tag_set() is empty, warnings are printed in the nova logs:

  SAWarning: The IN-predicate on tags.tag was invoked with an empty
  sequence. This results in a contradiction, which nonetheless can be
  expensive to evaluate. Consider alternative strategies for improved
  performance.

  The fix is to not query the DB in that case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1480226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475663] [NEW] Incorrect behaviour of method _check_instance_exists

2015-07-17 Thread Sergey Nikitin
Public bug reported:

This method must check instance existence in the CURRENT token. But now it
checks instance existence in ANY token. It happens because the parameter
token_only was missing from the sqlalchemy query.

** Affects: nova
 Importance: Undecided
 Assignee: Sergey Nikitin (snikitin)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475663

Title:
  Incorrect behaviour of method _check_instance_exists

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  This method must check instance existence in the CURRENT token. But now it
  checks instance existence in ANY token. It happens because the parameter
  token_only was missing from the sqlalchemy query.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475663/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1425485] [NEW] Method sqlalchemy.api._check_instance_exists has incorrect behavior

2015-02-25 Thread Sergey Nikitin
Public bug reported:

This method must raise an InstanceNotFound exception if there is no
instance with the specified UUID. But now it raises the exception only if we
have no instances at all. It happens because the filter on UUID was missing
from the sqlalchemy query.

** Affects: nova
 Importance: Undecided
 Assignee: Sergey Nikitin (snikitin)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1425485

Title:
  Method sqlalchemy.api._check_instance_exists has incorrect behavior

Status in OpenStack Compute (Nova):
  New

Bug description:
  This method must raise an InstanceNotFound exception if there is no
  instance with the specified UUID. But now it raises the exception only if we
  have no instances at all. It happens because the filter on UUID was missing
  from the sqlalchemy query.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1425485/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401486] [NEW] Incorrect initialisation of tests for extended_availability_zones V21 API extension

2014-12-11 Thread Sergey Nikitin
Public bug reported:

In the test case ExtendedAvailabilityZoneTestV21 we initialize ALL API
extensions instead of just one (the extended_availability_zone extension).

here:
https://github.com/openstack/nova/blob/c3f3dc012ae3938b6f116491273a4eef0acfab83/nova/tests/unit/api/openstack/compute/contrib/test_extended_availability_zone.py#L96

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1401486

Title:
  Incorrect initialisation of tests for extended_availability_zones V21
  API extension

Status in OpenStack Compute (Nova):
  New

Bug description:
  In the test case ExtendedAvailabilityZoneTestV21 we initialize ALL API
  extensions instead of just one (the extended_availability_zone extension).

  here:
  
https://github.com/openstack/nova/blob/c3f3dc012ae3938b6f116491273a4eef0acfab83/nova/tests/unit/api/openstack/compute/contrib/test_extended_availability_zone.py#L96

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1401486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387812] [NEW] Hypervisor summary shows incorrect total storage

2014-10-30 Thread Sergey Nikitin
Public bug reported:

On Horizon UI in Admin/Hypervisors, Disk Usage shows incorrect value.

When Ceph is used for ephemeral storage, the summary adds up the Ceph storage
seen by each storage node rather than using the real amount of Ceph storage.
When we use Ceph we should divide the sum of the storage sizes by the
replication factor of the Ceph storage. (The replication factor is a number
which tells how many times the data in the Ceph storage is duplicated.)
For example, we have 3 nodes and each node sees 60 GB of storage.
The replication factor is 2, so the total storage is 60 * 3 / 2 = 90 GB.
But currently the total storage size is calculated as 60 + 60 + 60 = 180 GB.
See the screenshot (the real size of the storage is 207 TB).
So if the storage type is Ceph, we should ask for the storage size directly
from Ceph.
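
A short sketch of the intended accounting, using the numbers from the example
above:

node_reported_gb = [60, 60, 60]    # what each compute node sees
replication_factor = 2
usable_total_gb = sum(node_reported_gb) / replication_factor
print(usable_total_gb)             # 90.0, not the naive sum of 180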

** Affects: nova
 Importance: Undecided
 Assignee: Sergey Nikitin (snikitin)
 Status: New

** Attachment added: hypervisor_summary.png
   
https://bugs.launchpad.net/bugs/1387812/+attachment/4249350/+files/hypervisor_summary.png

** Changed in: nova
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387812

Title:
  Hypervisor summary shows incorrect total storage

Status in OpenStack Compute (Nova):
  New

Bug description:
  On Horizon UI in Admin/Hypervisors, Disk Usage shows incorrect value.

  When Ceph is used for ephemeral storage, the summary adds up the Ceph storage
  seen by each storage node rather than using the real amount of Ceph storage.
  When we use Ceph we should divide the sum of the storage sizes by the
  replication factor of the Ceph storage. (The replication factor is a number
  which tells how many times the data in the Ceph storage is duplicated.)
  For example, we have 3 nodes and each node sees 60 GB of storage.
  The replication factor is 2, so the total storage is 60 * 3 / 2 = 90 GB.
  But currently the total storage size is calculated as 60 + 60 + 60 = 180 GB.
  See the screenshot (the real size of the storage is 207 TB).
  So if the storage type is Ceph, we should ask for the storage size directly
  from Ceph.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1387812/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369858] [NEW] There is no migration test for migration #254

2014-09-15 Thread Sergey Nikitin
Public bug reported:

In change request  https://review.openstack.org/#/c/114286/ migration
254 was added. But we have no migration test for it.

** Affects: nova
 Importance: Undecided
 Assignee: Sergey Nikitin (snikitin)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369858

Title:
  There is no migration test for migration #254

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  In change request  https://review.openstack.org/#/c/114286/ migration
  254 was added. But we have no migration test for it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1335859] [NEW] Wrong assert in nova.tests.virt.vmwareapi.test_vmops.py

2014-06-30 Thread Sergey Nikitin
Public bug reported:

Bad assertion in nova.tests.virt.vmwareapi.test_vmops.py:640:

self.assertTrue(3, len(mock_mkdir.mock_calls)) should be replaced with
assertEqual
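
For illustration, a small standalone test (hypothetical, not the original
vmwareapi test) showing why the assertTrue form can never fail:

import unittest
from unittest import mock

class DemoTest(unittest.TestCase):
    def test_assert_true_vs_assert_equal(self):
        mock_mkdir = mock.Mock()
        mock_mkdir()  # only one call is recorded

        # Always passes: assertTrue() only checks that its first argument
        # (the literal 3) is truthy; the second argument is treated as the
        # failure message, not as a value to compare.
        self.assertTrue(3, len(mock_mkdir.mock_calls))

        # Actually compares the two values and fails here, since 1 != 3.
        self.assertEqual(3, len(mock_mkdir.mock_calls))

if __name__ == '__main__':
    unittest.main()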

** Affects: nova
 Importance: Undecided
 Assignee: Sergey Nikitin (snikitin)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1335859

Title:
  Wrong assert in nova.tests.virt.vmwareapi.test_vmops.py

Status in OpenStack Compute (Nova):
  New

Bug description:
  Bad assertion in nova.tests.virt.vmwareapi.test_vmops.py:640:

  self.assertTrue(3, len(mock_mkdir.mock_calls)) should be replaced with
  assertEqual

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1335859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323489] [NEW] list_user_ids_for_project returns dumb member in LDAP backend

2014-05-27 Thread Sergey Nikitin
Public bug reported:

When members are returned as attributes, the method list_user_ids_for_project
returns the dumb member.
In this case the members are returned in the following form:
 {'member': [u'CN=dumb,DC=nonexistent', 
u'CN=d26520a7dbb64bb791d1b1c6759fddf6,OU=Users,CN=example,CN=com']}
The check that this member is the dumb member returns an incorrect result,
because the dumb member dn is 'cn=dumb,dc=nonexistent', not 'CN=dumb,DC=nonexistent'.
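
A minimal standalone sketch of the comparison problem (lower-casing the whole
DN is a simplification of proper DN normalization, not Keystone's actual fix):

dumb_member_dn = 'cn=dumb,dc=nonexistent'
returned_dn = u'CN=dumb,DC=nonexistent'

# Case-sensitive comparison misses the dumb member, so it leaks into results.
print(returned_dn == dumb_member_dn)                  # False

# Normalizing both sides first recognizes it, so it can be filtered out.
print(returned_dn.lower() == dumb_member_dn.lower())  # True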

** Affects: keystone
 Importance: Undecided
 Assignee: Sergey Nikitin (snikitin)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1323489

Title:
  list_user_ids_for_project returns dumb member in LDAP backend

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  When members are returned as attributes, the method list_user_ids_for_project
  returns the dumb member.
  In this case the members are returned in the following form:
   {'member': [u'CN=dumb,DC=nonexistent', 
  u'CN=d26520a7dbb64bb791d1b1c6759fddf6,OU=Users,CN=example,CN=com']}
  The check that this member is the dumb member returns an incorrect result,
  because the dumb member dn is 'cn=dumb,dc=nonexistent', not 'CN=dumb,DC=nonexistent'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1323489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1318927] [NEW] Wrong behaviors when updating project with LDAP backends and turned on emulation parameter 'enabled' of tenant

2014-05-13 Thread Sergey Nikitin
Public bug reported:

The update request returns the old value of the 'enabled' field when updating a tenant.
This bug is reproduced when using LDAP backends and when the flag
'tenant_enabled_emulation' is True.

** Affects: keystone
 Importance: Undecided
 Assignee: Sergey Nikitin (snikitin)
 Status: New

** Description changed:

- Update request returns old value of field 'enabled' when updating
- tenant.
+ Update request returns old value of field 'enabled' when updating tenant. 
+ This bug is reproduced when using LDAP backends and when flag 
'tenant_enabled_emulation' is True.

** Changed in: keystone
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1318927

Title:
  Wrong behaviors when updating project with LDAP backends and turned on
  emulation parameter 'enabled' of tenant

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The update request returns the old value of the 'enabled' field when updating a tenant.
  This bug is reproduced when using LDAP backends and when the flag
  'tenant_enabled_emulation' is True.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1318927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316128] [NEW] Move code in ldap which delete elements of tree to common method

2014-05-05 Thread Sergey Nikitin
Public bug reported:

The code which uses 'search_s' and 'delete_s' in the ldap backends should be
moved to a new method to remove code duplication.
Handling of the ldap.NO_SUCH_OBJECT exception should also be added, because
right now it is just ignored.
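
A hedged sketch (not Keystone's actual code) of the kind of shared helper this
suggests, with explicit NO_SUCH_OBJECT handling:

import ldap

def delete_tree_members(conn, search_base, query='(objectClass=*)'):
    # One shared place that searches for child entries and deletes them.
    try:
        entries = conn.search_s(search_base, ldap.SCOPE_ONELEVEL, query)
    except ldap.NO_SUCH_OBJECT:
        # Previously this case was silently ignored; handle (or log) it here.
        return
    for dn, _attrs in entries:
        conn.delete_s(dn)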

** Affects: keystone
 Importance: Undecided
 Assignee: Sergey Nikitin (snikitin)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1316128

Title:
  Move code in ldap which delete elements of tree to common method

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  The code which uses 'search_s' and 'delete_s' in the ldap backends should be
  moved to a new method to remove code duplication.
  Handling of the ldap.NO_SUCH_OBJECT exception should also be added, because
  right now it is just ignored.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1316128/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1310952] [NEW] Variable 'image' not assigned in api.v2.image_data

2014-04-22 Thread Sergey Nikitin
Public bug reported:

An UnboundLocalError is raised in a 'try/except' block when the exception
handler references the var 'image':

File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 393, in assertRaises
self.assertThat(our_callable, matcher)
  File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 404, in assertThat
mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 454, in _matchHelper
mismatch = matcher.match(matchee)
  File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 108, in match
mismatch = self.exception_matcher.match(exc_info)
  File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py,
 line 62, in match
mismatch = matcher.match(matchee)
  File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 385, in match
reraise(*matchee)
  File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 101, in match
result = matchee()
  File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 902, in __call__
return self._callable_object(*self._args, **self._kwargs)
  File glance/common/utils.py, line 437, in wrapped
return func(self, req, *args, **kwargs)
  File glance/api/v2/image_data.py, line 120, in upload
self._restore(image_repo, image)
UnboundLocalError: local variable 'image' referenced before assignment
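
A minimal standalone sketch of the failure pattern (names are illustrative, not
Glance's actual code); running it raises UnboundLocalError from the except
block because 'image' was never bound:

class FailingRepo(object):
    def get(self, image_id):
        raise ValueError('backend error')

def upload(image_repo, image_id):
    try:
        image = image_repo.get(image_id)  # raises before 'image' is bound
        return image
    except ValueError:
        # 'image' was never assigned, so referencing it here raises
        # UnboundLocalError instead of running the intended cleanup.
        print('restoring %s' % image)

upload(FailingRepo(), 'abc123')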

** Affects: glance
 Importance: Undecided
 Assignee: Sergey Nikitin (snikitin)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1310952

Title:
  Variable 'image' not assigned in api.v2.image_data

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress

Bug description:
  An UnboundLocalError is raised in a 'try/except' block when the exception
  handler references the var 'image':

  File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 393, in assertRaises
  self.assertThat(our_callable, matcher)
File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 404, in assertThat
  mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 454, in _matchHelper
  mismatch = matcher.match(matchee)
File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 108, in match
  mismatch = self.exception_matcher.match(exc_info)
File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py,
 line 62, in match
  mismatch = matcher.match(matchee)
File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 385, in match
  reraise(*matchee)
File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py,
 line 101, in match
  result = matchee()
File 
/opt/stack/glance/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 902, in __call__
  return self._callable_object(*self._args, **self._kwargs)
File glance/common/utils.py, line 437, in wrapped
  return func(self, req, *args, **kwargs)
File glance/api/v2/image_data.py, line 120, in upload
  self._restore(image_repo, image)
  UnboundLocalError: local variable 'image' referenced before assignment

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1310952/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1305056] [NEW] Impossible to use method search_s in BaseLdap if attribute 'page_size' is not 0.

2014-04-09 Thread Sergey Nikitin
Public bug reported:

/opt/stack/keystone/keystone/common/ldap/core.py #493-520

def search_s(self, base, scope,
 filterstr='(objectClass=*)', attrlist=None, attrsonly=0):
# NOTE(morganfainberg): Remove None singletons from this list, which
# allows us to set mapped attributes to None as defaults in config.
# Without this filtering, the ldap query would raise a TypeError since
# attrlist is expected to be an iterable of strings.
if attrlist is not None:
attrlist = [attr for attr in attrlist if attr is not None]
LOG.debug('LDAP search: base=%s scope=%s filterstr=%s '
  'attrs=%s attrsonly=%s',
  base, scope, filterstr, attrlist, attrsonly)
if self.page_size:
ldap_result = self._paged_search_s(base, scope,
   filterstr, attrlist)
else:
base_utf8 = utf8_encode(base)
filterstr_utf8 = utf8_encode(filterstr)
if attrlist is None:
attrlist_utf8 = None
else:
attrlist_utf8 = map(utf8_encode, attrlist)
ldap_result = self.conn.search_s(base_utf8, scope,
 filterstr_utf8,
 attrlist_utf8, attrsonly)

py_result = convert_ldap_result(ldap_result)

return py_result

The variable 'py_result' may not be initialized if self.page_size > 0,
because it's initialized only in the 'else' block.

** Affects: keystone
 Importance: Undecided
 Assignee: Sergey Nikitin (snikitin)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1305056

Title:
  Impossible to use method search_s in BaseLdap if attribute 'page_size'
  is not 0.

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  /opt/stack/keystone/keystone/common/ldap/core.py #493-520

  def search_s(self, base, scope,
   filterstr='(objectClass=*)', attrlist=None, attrsonly=0):
  # NOTE(morganfainberg): Remove None singletons from this list, which
  # allows us to set mapped attributes to None as defaults in config.
  # Without this filtering, the ldap query would raise a TypeError since
  # attrlist is expected to be an iterable of strings.
  if attrlist is not None:
  attrlist = [attr for attr in attrlist if attr is not None]
  LOG.debug('LDAP search: base=%s scope=%s filterstr=%s '
'attrs=%s attrsonly=%s',
base, scope, filterstr, attrlist, attrsonly)
  if self.page_size:
  ldap_result = self._paged_search_s(base, scope,
 filterstr, attrlist)
  else:
  base_utf8 = utf8_encode(base)
  filterstr_utf8 = utf8_encode(filterstr)
  if attrlist is None:
  attrlist_utf8 = None
  else:
  attrlist_utf8 = map(utf8_encode, attrlist)
  ldap_result = self.conn.search_s(base_utf8, scope,
   filterstr_utf8,
   attrlist_utf8, attrsonly)

  py_result = convert_ldap_result(ldap_result)

  return py_result

  The variable 'py_result' may not be initialized if self.page_size > 0,
  because it's initialized only in the 'else' block.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1305056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1264089] [NEW] Wrong string format in exception in glance.api.v2.image_data

2013-12-25 Thread Sergey Nikitin
Public bug reported:

/glance/api/v2/image_data.py #103-107

except exception.ImageSizeLimitExceeded as e:
    msg = _("The incoming image is too large: %") % e
    LOG.error(msg)
    raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
                                              request=req)

The 's' character is missing from the format specifier in the message (it should be '%s').
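
A quick standalone illustration (not Glance's code): a bare '%' conversion is
invalid, so the formatting itself blows up instead of producing the intended
message:

e = Exception('image is 2 GiB, limit is 1 GiB')    # stand-in for the real exception
try:
    msg = "The incoming image is too large: %" % e
except ValueError as err:
    print(err)    # incomplete format
msg = "The incoming image is too large: %s" % e    # the intended format string
print(msg)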

** Affects: glance
 Importance: Undecided
 Assignee: Sergey Nikitin (snikitin)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Sergey Nikitin (snikitin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1264089

Title:
  Wrong string format in exception in glance.api.v2.image_data

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress

Bug description:
  /glance/api/v2/image_data.py #103-107

  except exception.ImageSizeLimitExceeded as e:
      msg = _("The incoming image is too large: %") % e
      LOG.error(msg)
      raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
                                                request=req)

  The 's' character is missing from the format specifier in the message (it should be '%s').

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1264089/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp