[Yahoo-eng-team] [Bug 1245208] [NEW] LBaaS: unit tests for radware plugin driver should not employ multithreading

2013-10-28 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The Radware plugin driver uses a task queue to interact with the backend
device.
Several operations, such as LBaaS object deletion, are performed asynchronously.
In the unit test code the actual object deletion happens in a separate thread,
which forces tricks like putting the test thread to sleep.
Such unit tests are unreliable and can lead to failures that are hard to
catch or debug.

The unit test code should be refactored to use a single-threaded
strategy for driver operations.
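A common way to do this is to patch the driver's dispatch point so queued tasks run inline in the test thread. A minimal sketch, assuming a hypothetical `_run_async` hook (the real driver's dispatch method is named differently):

```python
from unittest import mock


class FakeDriver:
    """Stand-in for a driver that dispatches work to a worker-thread queue."""

    def __init__(self):
        self.deleted = []

    def _run_async(self, func, *args):
        # In production this would enqueue func for a background thread.
        raise NotImplementedError("dispatched to a worker thread")

    def delete_pool(self, pool_id):
        self._run_async(self._do_delete, pool_id)

    def _do_delete(self, pool_id):
        self.deleted.append(pool_id)


driver = FakeDriver()
# Patch the dispatch point so the task runs inline in the test thread:
# no second thread, no sleep() tricks, fully deterministic.
with mock.patch.object(driver, '_run_async',
                       side_effect=lambda func, *a: func(*a)):
    driver.delete_pool('pool-1')

assert driver.deleted == ['pool-1']
```

Because the side effect calls the function directly, the assertion can run immediately after the operation with no synchronization.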

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas radware
-- 
LBaaS: unit tests for radware plugin driver should not employ multithreading
https://bugs.launchpad.net/bugs/1245208
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1244126] Re: neutron lb-pool-list running by admin returns also non-admin load balancer pools which appear later in horizon's admin project

2013-10-28 Thread Eugene Nikanorov
Per the description of https://bugs.launchpad.net/neutron/+bug/1238293 I'm
marking this as invalid for neutron.
I'll add the horizon project so the bug can be evaluated there.

** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1244126

Title:
  neutron lb-pool-list running by admin returns also non-admin load
  balancer pools which appear later in horizon's admin project

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Neutron (virtual network service):
  Won't Fix

Bug description:
  Version
  ===
  Havana on RHEL

  Description
  ===
  neutron lb-pool-list should return the list of load balancer pools in the
  user's tenant; however, when run as admin it prints the pools of all tenants.
  The side effect is that Horizon's Project / Load Balancers tab, when logged
  in as the admin user, contains load balancers that have nothing to do with
  the admin tenant.

  # keystone tenant-list 
  +----------------------------------+----------+---------+
  |                id                |   name   | enabled |
  +----------------------------------+----------+---------+
  | abd7d9c464814aff98652c3e235a799b |  admin   |   True  |
  | e86dccb5c751465a8d338f6e3aeb8228 | services |   True  |
  | 43029e52371247ca9dc771780a8f41b5 | vlan_211 |   True  |
  | 0b3607a0807a4d928b0eab794b291198 | vlan_212 |   True  |
  | 783c402f63c94545b270177661631eac | vlan_213 |   True  |
  | 8bfe5effe4e942c2a5d4f41e46f2e09d | vlan_214 |   True  |
  +----------------------------------+----------+---------+

  
  # neutron lb-pool-list
  +--------------------------------------+---------------+-------------+----------+----------------+--------+
  | id                                   | name          | lb_method   | protocol | admin_state_up | status |
  +--------------------------------------+---------------+-------------+----------+----------------+--------+
  | 2c16a5cf-6ee7-4948-85cd-0faa9fc5eef4 | pool_vlan_214 | ROUND_ROBIN | HTTP     | True           | ACTIVE |
  +--------------------------------------+---------------+-------------+----------+----------------+--------+

  
  # neutron lb-pool-list --all-tenant
  +--------------------------------------+---------------+-------------+----------+----------------+--------+
  | id                                   | name          | lb_method   | protocol | admin_state_up | status |
  +--------------------------------------+---------------+-------------+----------+----------------+--------+
  | 2c16a5cf-6ee7-4948-85cd-0faa9fc5eef4 | pool_vlan_214 | ROUND_ROBIN | HTTP     | True           | ACTIVE |
  +--------------------------------------+---------------+-------------+----------+----------------+--------+

  
  # neutron lb-pool-list --tenant-id abd7d9c464814aff98652c3e235a799b
  (empty output)

  
  # neutron lb-pool-list --tenant-id 8bfe5effe4e942c2a5d4f41e46f2e09d
  +--------------------------------------+---------------+-------------+----------+----------------+--------+
  | id                                   | name          | lb_method   | protocol | admin_state_up | status |
  +--------------------------------------+---------------+-------------+----------+----------------+--------+
  | 2c16a5cf-6ee7-4948-85cd-0faa9fc5eef4 | pool_vlan_214 | ROUND_ROBIN | HTTP     | True           | ACTIVE |
  +--------------------------------------+---------------+-------------+----------+----------------+--------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1244126/+subscriptions



[Yahoo-eng-team] [Bug 1191069] Re: image-create fails on a boot from volume when no image ref is specified

2013-10-28 Thread Thierry Carrez
** Also affects: nova/grizzly
   Importance: Undecided
   Status: New

** Changed in: nova/grizzly
   Status: New => In Progress

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1191069

Title:
  image-create fails on a boot from volume when no image ref is
  specified

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) grizzly series:
  In Progress

Bug description:
  If I create a boot-from-volume instance like so, without passing in the
  image argument:

  nova boot --flavor 1 --key-name kerrin --block-device-mapping
  vda=5eb71b67-bfd2-4935-9b55-9ad92e14eae4::: bfv-vm2

  And then I try to create an image from this instance like so, I get
  the following exception:

  michael@controller:~/devstack$ nova --debug image-create b21282c1-02b3-4fc0-a9d1-73a19261b353 test1

  DEBUG (shell:768) The resource could not be found. (HTTP 404) (Request-ID: req-254dae37-b8c2-4e75-ac8e-3e44981bf1cc)
  Traceback (most recent call last):
    File "/opt/stack/python-novaclient/novaclient/shell.py", line 765, in main
      OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
    File "/opt/stack/python-novaclient/novaclient/shell.py", line 701, in main
      args.func(self.cs, args)
    File "/opt/stack/python-novaclient/novaclient/v1_1/shell.py", line 1225, in do_image_create
      image_uuid = cs.servers.create_image(server, args.name)
    File "/opt/stack/python-novaclient/novaclient/v1_1/servers.py", line 704, in create_image
      resp = self._action('createImage', server, body)[0]
    File "/opt/stack/python-novaclient/novaclient/v1_1/servers.py", line 872, in _action
      return self.api.client.post(url, body=body)
    File "/opt/stack/python-novaclient/novaclient/client.py", line 233, in post
      return self._cs_request(url, 'POST', **kwargs)
    File "/opt/stack/python-novaclient/novaclient/client.py", line 217, in _cs_request
      **kwargs)
    File "/opt/stack/python-novaclient/novaclient/client.py", line 199, in _time_request
      resp, body = self.request(url, method, **kwargs)
    File "/opt/stack/python-novaclient/novaclient/client.py", line 193, in request
      raise exceptions.from_response(resp, body, url, method)
  NotFound: The resource could not be found. (HTTP 404) (Request-ID: req-254dae37-b8c2-4e75-ac8e-3e44981bf1cc)

  In the nova-api logs I get the following exception:

  2013-06-14 18:04:28.667 ERROR nova.api.openstack [req-9caaa082-9e1a-43a2-9c95-2a7179cf751e admin demo] Caught error: Image  could not be found.
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack Traceback (most recent call last):
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack   File "/opt/stack/nova/nova/api/openstack/__init__.py", line 109, in __call__
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack     return req.get_response(self.application)
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack     application, catch_exc_info=False)
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/request.py", line 1260, in call_application
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack     return resp(environ, start_response)
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack   File "/opt/stack/python-keystoneclient/keystoneclient/middleware/auth_token.py", line 461, in __call__
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack     return self.app(env, start_response)
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack     return resp(environ, start_response)
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack     return resp(environ, start_response)
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack     return resp(environ, start_response)
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack   File "/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
  2013-06-14 18:04:28.667 15808 TRACE nova.api.openstack     response = self.app(environ, 

[Yahoo-eng-team] [Bug 1219672] Re: v3 swap volume has wrong signature

2013-10-28 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1219672

Title:
  v3 swap volume has wrong signature

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  When executing swap volume with the v3 API, I get the error below:

  2013-09-02 15:07:55.610 ERROR nova.api.openstack.extensions [req-ca269fb5-782c-4312-9b82-c9dcdf375916 admin admin] Unexpected exception in API method
  2013-09-02 15:07:55.610 TRACE nova.api.openstack.extensions Traceback (most recent call last):
  2013-09-02 15:07:55.610 TRACE nova.api.openstack.extensions   File "/opt/stack/nova/nova/api/openstack/extensions.py", line 472, in wrapped
  2013-09-02 15:07:55.610 TRACE nova.api.openstack.extensions     return f(*args, **kwargs)
  2013-09-02 15:07:55.610 TRACE nova.api.openstack.extensions TypeError: swap() got an unexpected keyword argument 'id'
  2013-09-02 15:07:55.610 TRACE nova.api.openstack.extensions

  
  In nova.api.openstack.compute.plugins.v3.extended_volumes.ExtendedVolumesController:

      def swap(self, req, server_id, body):

  should be:

      def swap(self, req, id, body):
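The mismatch arises because the router invokes member actions with the URL's resource identifier passed as the keyword argument `id`, so the handler's parameter name must match. A stripped-down illustration (the `dispatch` helper below is a simplification, not nova's actual routing code):

```python
def dispatch(handler, req, body, route_kwargs):
    """Call a handler the way a router does: URL path variables
    arrive as keyword arguments with fixed names (here, 'id')."""
    return handler(req, body=body, **route_kwargs)


class BrokenController:
    def swap(self, req, server_id, body):   # wrong parameter name
        return 'swapped %s' % server_id


class FixedController:
    def swap(self, req, id, body):          # matches the route variable
        return 'swapped %s' % id


route_kwargs = {'id': 'vol-1'}              # extracted from the request URL

try:
    dispatch(BrokenController().swap, 'req', {}, route_kwargs)
except TypeError as exc:
    broken_error = exc   # TypeError: unexpected keyword argument 'id'

result = dispatch(FixedController().swap, 'req', {}, route_kwargs)
assert result == 'swapped vol-1'
```

Renaming the parameter to `id` is enough because the router always supplies that keyword, regardless of the resource type.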

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1219672/+subscriptions



[Yahoo-eng-team] [Bug 1193434] Re: Supplying a port-id and min-servers results in servers without ports

2013-10-28 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1193434

Title:
  Supplying a port-id and min-servers results in servers without ports

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  If you attempt to create multiple servers in a single request, and at
  the same time specify a port-id in the requested networks, only the
  first instance to be created will get a port.

  There is no way in the API to supply a list of ports, so the API
  should reject the request in this case.
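A sketch of the rejection the API could perform (the names here are illustrative, not nova's actual validation code): a Neutron port can only be attached to a single instance, so a multi-create that pins a port is unsatisfiable and should fail fast.

```python
class BadRequest(Exception):
    """Stand-in for an HTTP 400 response."""


def validate_multi_create(requested_networks, min_count, max_count):
    # One port cannot be attached to several instances, so reject a
    # request that both pins a port and asks for more than one server.
    if max(min_count, max_count) > 1:
        for net in requested_networks:
            if net.get('port'):
                raise BadRequest(
                    'Cannot attach one port to multiple instances')


# A multi-create on a network (no pinned port) is fine.
validate_multi_create([{'uuid': 'net-1'}], min_count=3, max_count=3)

# A multi-create that pins a port must be rejected.
try:
    validate_multi_create([{'port': 'port-1'}], min_count=3, max_count=3)
    rejected = False
except BadRequest:
    rejected = True

assert rejected is True
```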

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1193434/+subscriptions



[Yahoo-eng-team] [Bug 1215705] Re: extensions config_drive v3 without alias as prefix for request params

2013-10-28 Thread Thierry Carrez
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1215705

Title:
  extensions config_drive v3 without alias as prefix for request params

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The request for 'config_drive' should be 'os-config-
  drive:config_drive'

  And also need namespace for xml

  def server_create(self, server_dict, create_kwargs):
  create_kwargs['config_drive'] = server_dict.get('config_drive')

  def server_xml_extract_server_deserialize(self, server_node, server_dict):
  config_drive = server_node.getAttribute('config_drive')
  if config_drive:
  server_dict['config_drive'] = config_drive
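The prefixed lookup being asked for might look roughly like this (simplified; the real v3 extension wires this through its own hooks, and the alias string is the one given above):

```python
ALIAS = 'os-config-drive'   # extension alias used as the request prefix


def server_create(server_dict, create_kwargs):
    # Read the alias-prefixed key instead of the bare 'config_drive'.
    create_kwargs['config_drive'] = server_dict.get(
        '%s:config_drive' % ALIAS)


create_kwargs = {}
server_create({'os-config-drive:config_drive': True}, create_kwargs)
assert create_kwargs['config_drive'] is True
```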

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1215705/+subscriptions



[Yahoo-eng-team] [Bug 1234015] Re: l2-pop : partial-mesh not implemented with only one VM in the network

2013-10-28 Thread Sylvain Afchain
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1234015

Title:
  l2-pop : partial-mesh not implemented with only one VM in the network

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  With ML2/L2-population, tunneling, and the OVS agent, broadcast traffic
  from a VM should only go to hosts that host a VM in the same network.
  But if there is only one VM in the network, broadcast traffic is sent
  to every tunnel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1234015/+subscriptions



[Yahoo-eng-team] [Bug 1245444] [NEW] failure and no ability to reset contents for reupload

2013-10-28 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

I configured Swift as my Glance backend.
I have 5 data servers, 2 zones and 2 replicas.

CONFIG_SWIFT_STORAGE_HOSTS=10.35.XX.XX,10.35.XX.XX,10.35.XX.XX,10.35.XX.XX,10.35.XX.XX

# Number of swift storage zones, this number MUST be no bigger than
# the number of storage devices configured
CONFIG_SWIFT_STORAGE_ZONES=2

# Number of swift storage replicas, this number MUST be no bigger
# than the number of storage zones configured
CONFIG_SWIFT_STORAGE_REPLICAS=2

# FileSystem type for storage nodes
CONFIG_SWIFT_STORAGE_FSTYPE=ext4


I fail to create an image due to a space issue on the data servers, and when I
run swift list I can see leftovers, plus errors in the Glance log about
resetting content for reupload:

2013-10-28 13:12:39.518 4957 ERROR glance.store.swift [-] Failed to add object to Swift.
Got error from Swift: put_object('glance', 'eb51f83b-7993-4e94-bba3-9ad9dc7e8525-1', ...) failure and no ability to reset contents for reupload.
2013-10-28 13:12:39.518 4957 ERROR glance.api.v1.upload_utils [-] Failed to upload image eb51f83b-7993-4e94-bba3-9ad9dc7e8525
2013-10-28 13:12:39.518 4957 TRACE glance.api.v1.upload_utils Traceback (most recent call last):
2013-10-28 13:12:39.518 4957 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.6/site-packages/glance/api/v1/upload_utils.py", line 101, in upload_data_to_store
2013-10-28 13:12:39.518 4957 TRACE glance.api.v1.upload_utils     store)
2013-10-28 13:12:39.518 4957 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.6/site-packages/glance/store/__init__.py", line 333, in store_add_to_backend
2013-10-28 13:12:39.518 4957 TRACE glance.api.v1.upload_utils     (location, size, checksum, metadata) = store.add(image_id, data, size)
2013-10-28 13:12:39.518 4957 TRACE glance.api.v1.upload_utils   File "/usr/lib/python2.6/site-packages/glance/store/swift.py", line 441, in add
2013-10-28 13:12:39.518 4957 TRACE glance.api.v1.upload_utils     raise glance.store.BackendException(msg)
2013-10-28 13:12:39.518 4957 TRACE glance.api.v1.upload_utils BackendException: Failed to add object to Swift.
Got error from Swift: put_object('glance', 'eb51f83b-7993-4e94-bba3-9ad9dc7e8525-1', ...) failure and no ability to reset contents for reupload.
2013-10-28 13:12:39.518 4957 TRACE glance.api.v1.upload_utils


[root@opens-vdsb ~(keystone_glance)]# swift list glance 
5a2a41a3-cfb7-4ba6-80f0-3c43690cdd02-1
be662602-52e9-49cc-93f7-955847190d76
eb51f83b-7993-4e94-bba3-9ad9dc7e8525-1

** Affects: glance
 Importance: Undecided
 Status: New

-- 
failure and no ability to reset contents for reupload
https://bugs.launchpad.net/bugs/1245444
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to Glance.



[Yahoo-eng-team] [Bug 1200253] Re: vmwareapi get_datastore_ref_and_name needs to be broken up into get_shared_datastores and get_local_datastores

2013-10-28 Thread Shawn Hartsock
** Changed in: nova
   Status: Incomplete => Invalid

** Changed in: nova
   Status: Invalid => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1200253

Title:
  vmwareapi get_datastore_ref_and_name needs to be broken up into
  get_shared_datastores and get_local_datastores

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  In vm_util.py, method get_datastore_ref_and_name:

  The method description, "Get the datastore list and choose the first
  local storage.", contains the keyword "and", which tends to indicate a
  problem in a method's design. This should probably be the composition
  of two methods, "Get the datastore list." and "Choose the first local
  storage.", at the least. At best, there should be a separate vSphere
  query to pull only the first local storage so that extra data is not
  pulled across the wire.

  See:
  
https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/vm_util.py#L670
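The proposed split might look roughly like this (the datastore dicts are simplified stand-ins for vSphere managed object references; the real code would issue separate vSphere queries):

```python
def get_datastore_list(session):
    """First responsibility: fetch the datastores (one query)."""
    return session['datastores']


def get_local_datastores(datastores):
    """Second responsibility: filter, instead of fetch-and-choose."""
    return [ds for ds in datastores if not ds['shared']]


def get_shared_datastores(datastores):
    return [ds for ds in datastores if ds['shared']]


session = {'datastores': [{'name': 'ds-local', 'shared': False},
                          {'name': 'ds-nfs', 'shared': True}]}
local = get_local_datastores(get_datastore_list(session))
shared = get_shared_datastores(get_datastore_list(session))
assert [ds['name'] for ds in local] == ['ds-local']
assert [ds['name'] for ds in shared] == ['ds-nfs']
```

Each function now has one responsibility, and callers that only need shared datastores no longer pay for the "choose first local" step.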

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1200253/+subscriptions



[Yahoo-eng-team] [Bug 1171930] Re: vsphere driver hardcoded to only use first datastore in cluster

2013-10-28 Thread Shawn Hartsock
Is this still a valid concern now that
https://review.openstack.org/#/c/52815/1/nova/virt/vmwareapi/vm_util.py
has merged?

** Changed in: nova
   Status: In Progress => Incomplete

** Changed in: nova
   Status: Incomplete => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1171930

Title:
  vsphere driver hardcoded to only use first datastore in cluster

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  This applies to Havana master.

  One of the biggest stumbling blocks for people using the vSphere
  driver is that it has very poor flexibility in choosing which
  datastore a VM will be placed on.  It simply picks the first datastore
  the API returns.

  I see people asking for two improvements:
  - being able to choose the datastore(s) used;
  - being able to spread disk images across datastores.

  One simple mechanism that seems like it could help a lot would be if
  the user could specify a datastore_regex, and the behavior of the
  vSphere driver would be to round-robin disk images across any
  datastore in the cluster that matched this regex.  Note, if true
  round-robin is hard, random + a check for capacity would probably be a
  good approximation.
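The "random + capacity check" approximation could be sketched as follows (the field names and data shapes here are illustrative, not the driver's actual API):

```python
import random
import re


def pick_datastore(datastores, datastore_regex, needed_gb):
    """Randomly pick a regex-matched datastore with enough free space --
    an approximation of round-robin placement."""
    pattern = re.compile(datastore_regex)
    candidates = [ds for ds in datastores
                  if pattern.match(ds['name']) and ds['free_gb'] >= needed_gb]
    if not candidates:
        raise RuntimeError('no datastore matching %r has %d GB free'
                           % (datastore_regex, needed_gb))
    return random.choice(candidates)


datastores = [{'name': 'ssd-1', 'free_gb': 500},
              {'name': 'ssd-2', 'free_gb': 20},
              {'name': 'sata-1', 'free_gb': 900}]
# Only ssd-1 both matches the regex and has 100 GB free.
chosen = pick_datastore(datastores, r'ssd-.*', 100)
assert chosen['name'] == 'ssd-1'
```

Over many placements, random choice among capacity-qualified matches spreads images across datastores without the bookkeeping true round-robin would need.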

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1171930/+subscriptions



[Yahoo-eng-team] [Bug 1216706] Re: Migration 211 doesn't downgrade with MySQL

2013-10-28 Thread Jeffrey Zhang
This bug has been fixed by https://review.openstack.org/#/c/43634/.
Because of an unexpected commit message, that review was not linked to
this bug, so the bug has to be closed manually.

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1216706

Title:
  Migration 211 doesn't downgrade with MySQL

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  The downgrade fails due to a foreign key constraint:

  Cannot drop index 'uniq_aggregate_metadata0aggregate_id0key0deleted':
  needed in a foreign key constraint
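On MySQL/InnoDB a unique index that backs a foreign key cannot be dropped while the constraint exists, so the downgrade has to drop the FK first and recreate it afterwards. This shows the statement ordering only; the constraint name below is hypothetical:

```python
# Order matters: the FK borrows the unique index, so MySQL refuses to
# drop the index while the constraint is still in place.
DOWNGRADE_STATEMENTS = [
    # 1. drop the constraint first (hypothetical FK name)
    "ALTER TABLE aggregate_metadata DROP FOREIGN KEY "
    "aggregate_metadata_ibfk_1",
    # 2. now the index is free to be dropped
    "ALTER TABLE aggregate_metadata DROP INDEX "
    "uniq_aggregate_metadata0aggregate_id0key0deleted",
    # 3. recreate the FK (MySQL will build or reuse an ordinary index)
    "ALTER TABLE aggregate_metadata ADD CONSTRAINT "
    "aggregate_metadata_ibfk_1 FOREIGN KEY (aggregate_id) "
    "REFERENCES aggregates (id)",
]

assert 'DROP FOREIGN KEY' in DOWNGRADE_STATEMENTS[0]
assert 'DROP INDEX' in DOWNGRADE_STATEMENTS[1]
```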

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1216706/+subscriptions
