[Yahoo-eng-team] [Bug 1317082] Re: LBaaS. When a Vip is created enable subnet selection

2014-08-19 Thread Gary W. Smith
*** This bug is a duplicate of bug 1285504 ***
https://bugs.launchpad.net/bugs/1285504

** This bug has been marked a duplicate of bug 1285504
   lbaas add vip should allow vip from a different subnet

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1317082

Title:
  LBaaS. When a Vip is created enable subnet selection

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When a VIP is created, the user would like to be able to specify a subnet
  for the VIP when it differs from the pool's subnet.
  The current implementation uses the pool's subnet, and the user is not able
  to specify a different subnet for the VIP.
  Note that the LBaaS API for creating a VIP does support specifying a
  different VIP subnet.
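For reference, the API-side capability is straightforward to use. Below is a sketch of an LBaaS v1 VIP create request body whose subnet differs from the pool's subnet; the field names follow the v1 API, while the helper name and all IDs are illustrative placeholders:

```python
def build_vip_body(name, pool_id, vip_subnet_id, protocol="HTTP", port=80):
    """Build an LBaaS v1 request body for a VIP on its own subnet.

    vip_subnet_id need not match the pool's subnet -- the API accepts
    any subnet here, which is the capability Horizon does not expose.
    """
    return {
        "vip": {
            "name": name,
            "pool_id": pool_id,
            "subnet_id": vip_subnet_id,  # may differ from the pool's subnet
            "protocol": protocol,
            "protocol_port": port,
        }
    }

# e.g. neutron_client.create_vip(body) would send this to the API
body = build_vip_body("web-vip", "pool-uuid", "frontend-subnet-uuid")
```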

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1317082/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359000] Re: Horizon orchestration stacks table repeats stack

2014-08-19 Thread Richard Jones
This bug goes deep down the rabbit hole: as it turns out, the actual bug
is deep inside Heat. Will move this activity over there.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1359000

Title:
  Horizon orchestration stacks table repeats stack

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Creating a stack in Orchestration currently results in the Stacks
  table repeating the single stack ad infinitum. The template used to
  create the stack is:

  # This is a hello world HOT template just defining a single compute instance
  heat_template_version: 2013-05-23

  description: >
    HOT template that just defines single compute instance.

  parameters:
    flavor:
      type: string
      description: Instance type for the instance to be created
      default: m1.nano
      constraints:
        - allowed_values: [m1.nano, m1.micro, m1.tiny, m1.small, m1.large]
          description: Value must be one of 'm1.nano', 'm1.micro', 'm1.tiny',
            'm1.small' or 'm1.large'
    image:
      type: string
      description: name of the image to use for the instance
      default: cirros-0.3.2-x86_64-uec

  resources:
    my_instance:
      type: OS::Nova::Server
      properties:
        image: { get_param: image }
        flavor: { get_param: flavor }

  outputs:
    instance_ip:
      description: The IP address of the deployed instance
      value: { get_attr: [my_instance, first_address] }

  
  Creating a stack with this template results in the following heat client
  output:

  richard@devstack:~/devstack$ heat stack-list
  +--------------------------------------+------------+-----------------+----------------------+
  | id                                   | stack_name | stack_status    | creation_time        |
  +--------------------------------------+------------+-----------------+----------------------+
  | 74f1c88e-68e8-4864-9f90-f7f206bc8a38 | test2      | CREATE_COMPLETE | 2014-08-20T01:54:04Z |
  +--------------------------------------+------------+-----------------+----------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1359000/+subscriptions



[Yahoo-eng-team] [Bug 1359035] [NEW] Unable to feel significance of "nova net-create"

2014-08-19 Thread Mh Raies
Public bug reported:

When I create a network using:

#nova net-create  Test-net 10.0.0.0/8

the following error is received:

ERROR (ClientException): Create networks failed (HTTP 503) (Request-ID:
req-00cca4f8-ec13-44b0-99ac-05573c1da49b)


nova-api-logs are as --->

2014-08-20 10:21:26.412 ERROR 
nova.api.openstack.compute.contrib.os_tenant_networks 
[req-00cca4f8-ec13-44b0-99ac-05573c1da49b admin admin] Create networks failed
2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks Traceback (most recent 
call last):
2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
"/opt/stack/nova/nova/api/openstack/compute/contrib/os_tenant_networks.py", 
line 184, in create
2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks label=label, **kwargs)
2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
"/opt/stack/nova/nova/network/base_api.py", line 97, in create
2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks raise 
NotImplementedError()
2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks NotImplementedError
2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks 
2014-08-20 10:21:26.439 INFO nova.api.openstack.wsgi 
[req-00cca4f8-ec13-44b0-99ac-05573c1da49b admin admin] HTTP exception thrown: 
Create networks failed
2014-08-20 10:21:26.440 DEBUG nova.api.openstack.wsgi 
[req-00cca4f8-ec13-44b0-99ac-05573c1da49b admin admin] Returning 503 to user: 
Create networks failed __call__ /opt/stack/nova/nova/api/openstack/wsgi.py:1200
2014-08-20 10:21:26.440 INFO nova.osapi_compute.wsgi.server 
[req-00cca4f8-ec13-44b0-99ac-05573c1da49b admin admin] 10.0.9.49 "POST 
/v2/6a1118be3e51427384bcebade69e1703/os-tenant-networks HTTP/1.1" status: 503 
len: 278 time: 0.1678212


Also similar kind of bug was raised -
https://bugs.launchpad.net/nova/+bug/1172173


But if one cannot create a network using the CLI "nova net-create", as in the
bug reported above, then what is the significance of having this CLI?

This should be removed.
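For context, the 503 surfaces because the configured network backend never overrides create(): the call falls through to the base class, which raises NotImplementedError, and the os-tenant-networks extension reports that as "Create networks failed". A minimal sketch of the pattern (class names are illustrative, not the actual nova code):

```python
class NetworkAPIBase(object):
    """Mimics the pattern in nova/network/base_api.py."""

    def create(self, context, label, **kwargs):
        # Backends that support tenant network creation override this.
        raise NotImplementedError()


class NeutronStyleAPI(NetworkAPIBase):
    """A backend that does not implement create(), like the one in the
    report, inherits the failing base method."""


api = NeutronStyleAPI()
try:
    api.create(None, label="Test-net", cidr="10.0.0.0/8")
    outcome = "created"
except NotImplementedError:
    # The extension turns this into the HTTP 503 the user sees.
    outcome = "HTTP 503"
```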

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359035

Title:
  Unable to feel significance of "nova net-create"

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I create a network using:

  #nova net-create  Test-net 10.0.0.0/8

  the following error is received:

  ERROR (ClientException): Create networks failed (HTTP 503) (Request-
  ID: req-00cca4f8-ec13-44b0-99ac-05573c1da49b)


  nova-api-logs are as --->

  2014-08-20 10:21:26.412 ERROR 
nova.api.openstack.compute.contrib.os_tenant_networks 
[req-00cca4f8-ec13-44b0-99ac-05573c1da49b admin admin] Create networks failed
  2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks Traceback (most recent 
call last):
  2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
"/opt/stack/nova/nova/api/openstack/compute/contrib/os_tenant_networks.py", 
line 184, in create
  2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks label=label, **kwargs)
  2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks   File 
"/opt/stack/nova/nova/network/base_api.py", line 97, in create
  2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks raise 
NotImplementedError()
  2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks NotImplementedError
  2014-08-20 10:21:26.412 5126 TRACE 
nova.api.openstack.compute.contrib.os_tenant_networks 
  2014-08-20 10:21:26.439 INFO nova.api.openstack.wsgi 
[req-00cca4f8-ec13-44b0-99ac-05573c1da49b admin admin] HTTP exception thrown: 
Create networks failed
  2014-08-20 10:21:26.440 DEBUG nova.api.openstack.wsgi 
[req-00cca4f8-ec13-44b0-99ac-05573c1da49b admin admin] Returning 503 to user: 
Create networks failed __call__ /opt/stack/nova/nova/api/openstack/wsgi.py:1200
  2014-08-20 10:21:26.440 INFO nova.osapi_compute.wsgi.server 
[req-00cca4f8-ec13-44b0-99ac-05573c1da49b admin admin] 10.0.9.49 "POST 
/v2/6a1118be3e51427384bcebade69e1703/os-tenant-networks HTTP/1.1" status: 503 
len: 278 time: 0.1678212


  
  Also similar kind of bug was raised - 
https://bugs.launchpad.net/nova/+bug/1172173

  
  But if one cannot create a network using the CLI "nova net-create", as in
  the bug reported above, then what is the significance of having this CLI?

  This should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1359035/+subscriptions


[Yahoo-eng-team] [Bug 1359031] [NEW] xapi unplug VBD fail for 11 times when boot vm first time.

2014-08-19 Thread Richard Lee
Public bug reported:

When I first boot a VM in XenServer, it fails and I get the following log:
2014-08-20 12:36:44.161 4352 AUDIT nova.compute.manager 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] [instance: 
3ea9d91c-6997-4fd1-bed5-297a4712bac6] Starting instance...
2014-08-20 12:36:44.259 4352 AUDIT nova.compute.claims 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] [instance: 
3ea9d91c-6997-4fd1-bed5-297a4712bac6] Attempting claim: memory 521 MB, disk 1 
GB, VCPUs 1
2014-08-20 12:36:44.260 4352 AUDIT nova.compute.claims 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] [instance: 
3ea9d91c-6997-4fd1-bed5-297a4712bac6] Total memory: 32737 MB, used: 512.00 MB
2014-08-20 12:36:44.260 4352 AUDIT nova.compute.claims 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] [instance: 
3ea9d91c-6997-4fd1-bed5-297a4712bac6] memory limit: 49105.50 MB, free: 48593.50 
MB
2014-08-20 12:36:44.260 4352 AUDIT nova.compute.claims 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] [instance: 
3ea9d91c-6997-4fd1-bed5-297a4712bac6] Total disk: 909 GB, used: 0.00 GB
2014-08-20 12:36:44.260 4352 AUDIT nova.compute.claims 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] [instance: 
3ea9d91c-6997-4fd1-bed5-297a4712bac6] disk limit not specified, defaulting to 
unlimited
2014-08-20 12:36:44.261 4352 AUDIT nova.compute.claims 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] [instance: 
3ea9d91c-6997-4fd1-bed5-297a4712bac6] Total CPUs: 4 VCPUs, used: 0.00 VCPUs
2014-08-20 12:36:44.261 4352 AUDIT nova.compute.claims 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] [instance: 
3ea9d91c-6997-4fd1-bed5-297a4712bac6] CPUs limit not specified, defaulting to 
unlimited
2014-08-20 12:36:44.261 4352 AUDIT nova.compute.claims 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] [instance: 
3ea9d91c-6997-4fd1-bed5-297a4712bac6] Claim successful
2014-08-20 12:36:47.317 4352 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
2014-08-20 12:36:47.791 4352 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 31704.0
2014-08-20 12:36:47.791 4352 AUDIT nova.compute.resource_tracker [-] Free disk 
(GB): 908
2014-08-20 12:36:47.791 4352 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 3
2014-08-20 12:36:47.845 4352 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for compute2:openstack
2014-08-20 12:36:56.278 4352 INFO nova.virt.xenapi.vm_utils 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] VBD 
OpaqueRef:cafa4730-c623-9c60-a3cb-734e40722e85 uplug failed with 
"DEVICE_DETACH_REJECTED", attempt 1/11
2014-08-20 12:36:57.318 4352 INFO nova.virt.xenapi.vm_utils 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] VBD 
OpaqueRef:cafa4730-c623-9c60-a3cb-734e40722e85 uplug failed with 
"DEVICE_DETACH_REJECTED", attempt 2/11
2014-08-20 12:36:58.349 4352 INFO nova.virt.xenapi.vm_utils 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] VBD 
OpaqueRef:cafa4730-c623-9c60-a3cb-734e40722e85 uplug failed with 
"DEVICE_DETACH_REJECTED", attempt 3/11
2014-08-20 12:36:59.392 4352 INFO nova.virt.xenapi.vm_utils 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] VBD 
OpaqueRef:cafa4730-c623-9c60-a3cb-734e40722e85 uplug failed with 
"DEVICE_DETACH_REJECTED", attempt 4/11
2014-08-20 12:37:00.436 4352 INFO nova.virt.xenapi.vm_utils 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] VBD 
OpaqueRef:cafa4730-c623-9c60-a3cb-734e40722e85 uplug failed with 
"DEVICE_DETACH_REJECTED", attempt 5/11
2014-08-20 12:37:01.483 4352 INFO nova.virt.xenapi.vm_utils 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] VBD 
OpaqueRef:cafa4730-c623-9c60-a3cb-734e40722e85 uplug failed with 
"DEVICE_DETACH_REJECTED", attempt 6/11
2014-08-20 12:37:02.527 4352 INFO nova.virt.xenapi.vm_utils 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970f3047988f37f9e01c107165] VBD 
OpaqueRef:cafa4730-c623-9c60-a3cb-734e40722e85 uplug failed with 
"DEVICE_DETACH_REJECTED", attempt 7/11
2014-08-20 12:37:03.573 4352 INFO nova.virt.xenapi.vm_utils 
[req-87219265-8e09-4323-9628-9cedd9478c30 9c97e81761234263822fe08c78faec7a 
8112ee970

[Yahoo-eng-team] [Bug 1359026] [NEW] Extra loop in populate_network_choices when launching instance

2014-08-19 Thread Liyingjun
Public bug reported:

There is an extra loop in populate_network_choices in the launch instance workflow.
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py#L590-L592
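The referenced lines traverse the network list twice where one pass would do. A hypothetical stand-in for the pattern (not the actual Horizon code), showing the two passes collapsed into one:

```python
class Network(object):
    """Minimal stand-in for the network objects Horizon handles."""

    def __init__(self, id, name=""):
        self.id = id
        self.name = name

    def set_id_as_name_if_empty(self):
        if not self.name:
            self.name = self.id


def choices_two_passes(networks):
    # The pattern the bug points at: one loop to normalize names,
    # then a second loop to build the choice tuples.
    for n in networks:
        n.set_id_as_name_if_empty()
    return [(n.id, n.name) for n in networks]


def choices_single_pass(networks):
    # The same result in a single traversal.
    result = []
    for n in networks:
        n.set_id_as_name_if_empty()
        result.append((n.id, n.name))
    return result
```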

** Affects: horizon
 Importance: Undecided
 Assignee: Liyingjun (liyingjun)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Liyingjun (liyingjun)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1359026

Title:
  Extra loop in populate_network_choices when launching instance

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There is an extra loop in populate_network_choices in the launch instance
  workflow.
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/workflows/create_instance.py#L590-L592

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1359026/+subscriptions



[Yahoo-eng-team] [Bug 1244457] Re: ServiceCatalogException: Invalid service catalog service: compute

2014-08-19 Thread Gary W. Smith
Tempest is asking horizon to perform a basic login test, and horizon is
throwing an exception because keystone cannot supply a URL for nova.
Marking the horizon bug as invalid, since it is not a bug in the user
interface.

** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1244457

Title:
  ServiceCatalogException: Invalid service catalog service: compute

Status in Grenade - OpenStack upgrade testing:
  New
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  On the following review - https://review.openstack.org/#/c/53712/

  We failed the tempest tests on the dashboard scenario tests for the pg 
version of the job: 
  2013-10-24 21:26:00.445 | 
==
  2013-10-24 21:26:00.445 | FAIL: 
tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario[dashboard]
  2013-10-24 21:26:00.445 | 
tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario[dashboard]
  2013-10-24 21:26:00.445 | 
--
  2013-10-24 21:26:00.446 | _StringException: Empty attachments:
  2013-10-24 21:26:00.446 |   pythonlogging:''
  2013-10-24 21:26:00.446 |   stderr
  2013-10-24 21:26:00.446 |   stdout
  2013-10-24 21:26:00.446 | 
  2013-10-24 21:26:00.446 | Traceback (most recent call last):
  2013-10-24 21:26:00.446 |   File 
"tempest/scenario/test_dashboard_basic_ops.py", line 73, in test_basic_scenario
  2013-10-24 21:26:00.447 | self.user_login()
  2013-10-24 21:26:00.447 |   File 
"tempest/scenario/test_dashboard_basic_ops.py", line 64, in user_login
  2013-10-24 21:26:00.447 | self.opener.open(req, urllib.urlencode(params))
  2013-10-24 21:26:00.447 |   File "/usr/lib/python2.7/urllib2.py", line 406, 
in open
  2013-10-24 21:26:00.447 | response = meth(req, response)
  2013-10-24 21:26:00.447 |   File "/usr/lib/python2.7/urllib2.py", line 519, 
in http_response
  2013-10-24 21:26:00.447 | 'http', request, response, code, msg, hdrs)
  2013-10-24 21:26:00.448 |   File "/usr/lib/python2.7/urllib2.py", line 438, 
in error
  2013-10-24 21:26:00.448 | result = self._call_chain(*args)
  2013-10-24 21:26:00.448 |   File "/usr/lib/python2.7/urllib2.py", line 378, 
in _call_chain
  2013-10-24 21:26:00.448 | result = func(*args)
  2013-10-24 21:26:00.448 |   File "/usr/lib/python2.7/urllib2.py", line 625, 
in http_error_302
  2013-10-24 21:26:00.448 | return self.parent.open(new, 
timeout=req.timeout)
  2013-10-24 21:26:00.448 |   File "/usr/lib/python2.7/urllib2.py", line 406, 
in open
  2013-10-24 21:26:00.449 | response = meth(req, response)
  2013-10-24 21:26:00.449 |   File "/usr/lib/python2.7/urllib2.py", line 519, 
in http_response
  2013-10-24 21:26:00.449 | 'http', request, response, code, msg, hdrs)
  2013-10-24 21:26:00.449 |   File "/usr/lib/python2.7/urllib2.py", line 438, 
in error
  2013-10-24 21:26:00.449 | result = self._call_chain(*args)
  2013-10-24 21:26:00.449 |   File "/usr/lib/python2.7/urllib2.py", line 378, 
in _call_chain
  2013-10-24 21:26:00.449 | result = func(*args)
  2013-10-24 21:26:00.450 |   File "/usr/lib/python2.7/urllib2.py", line 625, 
in http_error_302
  2013-10-24 21:26:00.450 | return self.parent.open(new, 
timeout=req.timeout)
  2013-10-24 21:26:00.450 |   File "/usr/lib/python2.7/urllib2.py", line 406, 
in open
  2013-10-24 21:26:00.450 | response = meth(req, response)
  2013-10-24 21:26:00.450 |   File "/usr/lib/python2.7/urllib2.py", line 519, 
in http_response
  2013-10-24 21:26:00.450 | 'http', request, response, code, msg, hdrs)
  2013-10-24 21:26:00.450 |   File "/usr/lib/python2.7/urllib2.py", line 444, 
in error
  2013-10-24 21:26:00.451 | return self._call_chain(*args)
  2013-10-24 21:26:00.451 |   File "/usr/lib/python2.7/urllib2.py", line 378, 
in _call_chain
  2013-10-24 21:26:00.451 | result = func(*args)
  2013-10-24 21:26:00.451 |   File "/usr/lib/python2.7/urllib2.py", line 527, 
in http_error_default
  2013-10-24 21:26:00.451 | raise HTTPError(req.get_full_url(), code, msg, 
hdrs, fp)
  2013-10-24 21:26:00.451 | HTTPError: HTTP Error 500: INTERNAL SERVER ERROR

  The horizon logs have the following error info:

  [Thu Oct 24 21:18:43 2013] [error] Internal Server Error: /project/
  [Thu Oct 24 21:18:43 2013] [error] Traceback (most recent call last):
  [Thu Oct 24 21:18:43 2013] [error]   File 
"/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 
115, in get_response
  [Thu Oct 24 21:18:43 2013] [error] response = callback(request, 
*callback_args, **callback_kwargs)
  [Thu Oct 24 21:18:43 2013] [error]   File 
"/opt/stack/new/horizon/openstack_dashboard/wsgi/../../horizon/decorators.py", 
line 38, in dec
  [Thu

[Yahoo-eng-team] [Bug 1359017] [NEW] Returning unprecise error message when you create image with long name

2014-08-19 Thread Jin Liu
Public bug reported:

Image creation fails when the image name is > 255 chars, but the error
message does not indicate that name length is the issue.  Note that
double byte names work if they do not exceed the Glance length limit.

Upon clicking the Import button in the GUI, the console displayed the 500
internal error message below:
==
Error
An error occurred while creating image Test to see if text counter can handle 
double byte.Test to see if text counter can handle double byte..

Explanation: The server encountered an unexpected error: 500 (Internal
Server Error).
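Glance's image name column holds at most 255 characters, so a precise check could replace the opaque 500. A sketch of such a validation (the 255 limit matches the database column; the function name is illustrative):

```python
MAX_IMAGE_NAME_LEN = 255  # size of Glance's image name column


def validate_image_name(name):
    """Fail with a precise message instead of an opaque 500."""
    if len(name) > MAX_IMAGE_NAME_LEN:
        raise ValueError(
            "Image name is %d characters long; the maximum is %d."
            % (len(name), MAX_IMAGE_NAME_LEN))
    return name


try:
    validate_image_name("x" * 300)
    error_message = None
except ValueError as exc:
    error_message = str(exc)
```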

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1359017

Title:
  Returning unprecise error message when you create image with long name

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Image creation fails when the image name is > 255 chars, but the error
  message does not indicate that name length is the issue.  Note that
  double byte names work if they do not exceed the Glance length limit.

  Upon clicking the Import button in the GUI, the console displayed the 500
internal error message below:
  ==
  Error
  An error occurred while creating image Test to see if text counter can handle 
double byte.Test to see if text counter can handle double byte..

  Explanation: The server encountered an unexpected error: 500 (Internal
  Server Error).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1359017/+subscriptions



[Yahoo-eng-team] [Bug 1359002] [NEW] comments misspelled in type_filter

2014-08-19 Thread Freddie Zhang
Public bug reported:

The comments in nova.scheduler.filters.type_filter.py:

class TypeAffinityFilter(filters.BaseHostFilter):
...
def host_passes(self, host_state, filter_properties):
"""Dynamically limits hosts to one instance type

Return False if host has any instance types other then the requested
type. Return True if all instance types match or if host is empty.
"""
...


'other then' in the next-to-last line should be 'other than'

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1359002

Title:
  comments misspelled in type_filter

Status in OpenStack Compute (Nova):
  New

Bug description:
  The comments in nova.scheduler.filters.type_filter.py:

  class TypeAffinityFilter(filters.BaseHostFilter):
  ...
  def host_passes(self, host_state, filter_properties):
  """Dynamically limits hosts to one instance type

  Return False if host has any instance types other then the requested
  type. Return True if all instance types match or if host is empty.
  """
  ...

  
  'other then' in the next-to-last line should be 'other than'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1359002/+subscriptions



[Yahoo-eng-team] [Bug 1359000] [NEW] Horizon orchestration stacks table repeats stack

2014-08-19 Thread Richard Jones
Public bug reported:

Creating a stack in Orchestration currently results in the Stacks table
repeating the single stack ad infinitum. The template used to create the
stack is:

# This is a hello world HOT template just defining a single compute instance
heat_template_version: 2013-05-23

description: >
  HOT template that just defines single compute instance.

parameters:
  flavor:
    type: string
    description: Instance type for the instance to be created
    default: m1.nano
    constraints:
      - allowed_values: [m1.nano, m1.micro, m1.tiny, m1.small, m1.large]
        description: Value must be one of 'm1.nano', 'm1.micro', 'm1.tiny',
          'm1.small' or 'm1.large'
  image:
    type: string
    description: name of the image to use for the instance
    default: cirros-0.3.2-x86_64-uec

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }

outputs:
  instance_ip:
    description: The IP address of the deployed instance
    value: { get_attr: [my_instance, first_address] }


Creating a stack with this template results in the following heat client output:

richard@devstack:~/devstack$ heat stack-list
+--------------------------------------+------------+-----------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        |
+--------------------------------------+------------+-----------------+----------------------+
| 74f1c88e-68e8-4864-9f90-f7f206bc8a38 | test2      | CREATE_COMPLETE | 2014-08-20T01:54:04Z |
+--------------------------------------+------------+-----------------+----------------------+

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1359000

Title:
  Horizon orchestration stacks table repeats stack

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Creating a stack in Orchestration currently results in the Stacks
  table repeating the single stack ad infinitum. The template used to
  create the stack is:

  # This is a hello world HOT template just defining a single compute instance
  heat_template_version: 2013-05-23

  description: >
    HOT template that just defines single compute instance.

  parameters:
    flavor:
      type: string
      description: Instance type for the instance to be created
      default: m1.nano
      constraints:
        - allowed_values: [m1.nano, m1.micro, m1.tiny, m1.small, m1.large]
          description: Value must be one of 'm1.nano', 'm1.micro', 'm1.tiny',
            'm1.small' or 'm1.large'
    image:
      type: string
      description: name of the image to use for the instance
      default: cirros-0.3.2-x86_64-uec

  resources:
    my_instance:
      type: OS::Nova::Server
      properties:
        image: { get_param: image }
        flavor: { get_param: flavor }

  outputs:
    instance_ip:
      description: The IP address of the deployed instance
      value: { get_attr: [my_instance, first_address] }

  
  Creating a stack with this template results in the following heat client
  output:

  richard@devstack:~/devstack$ heat stack-list
  +--------------------------------------+------------+-----------------+----------------------+
  | id                                   | stack_name | stack_status    | creation_time        |
  +--------------------------------------+------------+-----------------+----------------------+
  | 74f1c88e-68e8-4864-9f90-f7f206bc8a38 | test2      | CREATE_COMPLETE | 2014-08-20T01:54:04Z |
  +--------------------------------------+------------+-----------------+----------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1359000/+subscriptions



[Yahoo-eng-team] [Bug 1358998] [NEW] "No L3 agents can host the router..." traces for DVR

2014-08-19 Thread Armando Migliaccio
Public bug reported:

In a typical tempest run the Neutron server's log is inundated with traces
at WARN level that say:

 No L3 agents can host the router 

An instance of this can be found here:

http://logs.openstack.org/91/114691/2/experimental/check-tempest-dsvm-
neutron-dvr/93b2ff0/logs/screen-q-svc.txt.gz?level=WARNING

Although under some circumstances this may be acceptable, we would need
to ensure that some of these do not mask scheduling errors.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

** Summary changed:

- No L3 agents can host the router traces for DVR
+ "No L3 agents can host the router..." traces for DVR

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358998

Title:
  "No L3 agents can host the router..." traces for DVR

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In a typical tempest run the Neutron server's log is inundated with
  traces at WARN level that say:

   No L3 agents can host the router 

  An instance of this can be found here:

  http://logs.openstack.org/91/114691/2/experimental/check-tempest-dsvm-
  neutron-dvr/93b2ff0/logs/screen-q-svc.txt.gz?level=WARNING

  Although under some circumstances this may be acceptable, we would
  need to ensure that some of these do not mask scheduling errors.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358998/+subscriptions



[Yahoo-eng-team] [Bug 1358982] [NEW] Bigswitch plugin does not correctly build

2014-08-19 Thread Kris Lindgren
Public bug reported:

When running "python setup.py install -O1 --skip-build --root {buildroot}", the
usr/etc/neutron/plugins/bigswitch directory that gets created contains the
following files:

restproxy.ini
README

But according to git it should contain:

restproxy.ini
ssl/ca_certs/README
ssl/host_certs/README

This makes packaging this plugin problematic, because you have leftover files
that you cannot attribute to anywhere in the code, and directories that are in
the repository but missing after the install step.

This is directly caused by commit:
https://github.com/openstack/neutron/commit/7255e056092f034daaeb4246a812900645d46911
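A likely shape of the fix is to enumerate each subdirectory explicitly in the data_files mapping so the ssl/ tree survives installation. A hedged sketch of a pbr setup.cfg [files] fragment (paths assume the in-tree etc/ layout; this is not the actual committed fix):

```ini
[files]
data_files =
    etc/neutron/plugins/bigswitch =
        etc/neutron/plugins/bigswitch/restproxy.ini
    etc/neutron/plugins/bigswitch/ssl/ca_certs =
        etc/neutron/plugins/bigswitch/ssl/ca_certs/README
    etc/neutron/plugins/bigswitch/ssl/host_certs =
        etc/neutron/plugins/bigswitch/ssl/host_certs/README
```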

** Affects: neutron
 Importance: Undecided
 Assignee: Kris Lindgren (klindgren)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358982

Title:
  Bigswitch plugin does not correctly build

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When running "python setup.py install -O1 --skip-build --root {buildroot}",
  the usr/etc/neutron/plugins/bigswitch directory that gets created contains
  the following files:
  restproxy.ini
  README

  But according to git it should contain:

  restproxy.ini
  ssl/ca_certs/README
  ssl/host_certs/README

  This makes packaging this plugin problematic, because you have leftover
  files that you cannot attribute to anywhere in the code, and directories
  that are in the repository but missing after the install step.

  This is directly caused by commit:
  
https://github.com/openstack/neutron/commit/7255e056092f034daaeb4246a812900645d46911

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358982/+subscriptions



[Yahoo-eng-team] [Bug 1350713] Re: Store configuration error in sheepdog

2014-08-19 Thread Tushar Kalra
I see these errors too with with the default glance-api.conf with
icehouse 2014.1.1. Glance works as expected and the errors are harmless.

Someone else reported this too:
https://ask.openstack.org/en/question/29511/glance-cache-prefetcher-
displays-errors-related-to-storage-backends-not-configured/

** Changed in: glance
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1350713

Title:
  Store configuration error in sheepdog

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  I have found the following errors along with the "deprecated" warning:
  2014-07-30 21:05:14.971 9608 ERROR glance.store.sheepdog [-] Error in store configuration: [Errno 2] No such file or directory
  2014-07-30 21:05:14.972 9608 WARNING glance.store [-] Deprecated: glance.store.sheepdog.Store not found in `known_store`. Stores need to be explicitly enabled in the configuration file.

  in the gate-tempest-dsvm-large-ops tests.

  Full stacktrace here:

  http://logs.openstack.org/51/106751/6/check/gate-tempest-dsvm-large-
  ops/640c35c/logs/screen-g-api.txt.gz
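If the log noise is the only concern, explicitly enabling just the stores in use keeps glance from probing unconfigured backends at startup. A minimal glance-api.conf fragment, assuming the Icehouse-era `known_stores` option (option name and store paths are assumptions based on that release):

```ini
[DEFAULT]
# Enable only the backends actually in use; glance then skips probing
# unconfigured stores such as sheepdog at startup.
known_stores = glance.store.filesystem.Store,
               glance.store.http.Store
```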

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1350713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358911] [NEW] Cannot get csrftoken cookie from horizon.cookies

2014-08-19 Thread Justin Pomeroy
Public bug reported:

It seems the new way of working with cookies in javascript is to use the
horizon.cookies object which uses the angular ngCookies module.  I have
run into a problem trying to get the csrftoken cookie from this object
since it seems that ngCookies expects all cookies to be compatible with
JSON.parse.  This cookie is just a string value so when ngCookies tries
to parse it as JSON I get an error:

>> horizon.cookies.get('csrftoken')
<< SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON 
data

This specific error message is from Firefox but I get a similar one from
Chrome.

I only found one other place in horizon where horizon.cookies.get is
used and that is for the network topology 'ntp_draw_mode' cookie.  In
this case it works because the cookie seems to be first set using
horizon.cookies.put which ends up storing the cookie with encoded
quotes:

%22small%22
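The failure is easy to reproduce outside the browser. A small Python analogue (with `json.loads` standing in for `JSON.parse`; the token value below is made up): a bare token string is not valid JSON, while a value written through `horizon.cookies.put` round-trips because it is stored as an encoded JSON document.

```python
import json
from urllib.parse import unquote

raw_csrftoken = "mXx0fJb2qkAa7RkP"  # hypothetical bare cookie value

# JSON.parse (json.loads here) rejects a bare token string outright...
try:
    json.loads(raw_csrftoken)
    parse_ok = True
except ValueError:
    parse_ok = False  # this branch is taken

# ...but a value stored via horizon.cookies.put arrives as %22small%22,
# which URL-decodes to the valid JSON document "small".
decoded = json.loads(unquote("%22small%22"))
```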

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1358911

Title:
  Cannot get csrftoken cookie from horizon.cookies

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  It seems the new way of working with cookies in javascript is to use
  the horizon.cookies object which uses the angular ngCookies module.  I
  have run into a problem trying to get the csrftoken cookie from this
  object since it seems that ngCookies expects all cookies to be
  compatible with JSON.parse.  This cookie is just a string value so
  when ngCookies tries to parse it as JSON I get an error:

  >> horizon.cookies.get('csrftoken')
  << SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the 
JSON data

  This specific error message is from Firefox but I get a similar one
  from Chrome.

  I only found one other place in horizon where horizon.cookies.get is
  used and that is for the network topology 'ntp_draw_mode' cookie.  In
  this case it works because the cookie seems to be first set using
  horizon.cookies.put which ends up storing the cookie with encoded
  quotes:

  %22small%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1358911/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358908] [NEW] CENTOS 6.5 : starting keystone service

2014-08-19 Thread reachparagm
Public bug reported:

While starting the keystone service on a CentOS 6.5 x86_64 system, the
service bails out saying:

service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: /usr/bin/python: No module named keystone.openstack.common
^
   [FAILED]

I have all rpms installed

# rpm -qa | grep openstack
openstack-selinux-0.1.3-2.el6ost.noarch
openstack-keystone-2014.1.1-1.el6.noarch
openstack-utils-2014.1-3.el6.noarch

# rpm -qa | grep keystone
python-keystoneclient-0.9.0-1.el6.noarch
python-keystone-2014.1.1-1.el6.noarch
openstack-keystone-2014.1.1-1.el6.noarch

I also performed an upgrade via `# pip install --upgrade python-keystoneclient`,
which added "python_keystoneclient-0.10.1.dist-info" in site-packages.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1358908

Title:
  CENTOS 6.5 : starting keystone service

Status in OpenStack Identity (Keystone):
  New

Bug description:
  While starting the keystone service on a CentOS 6.5 x86_64 system, the
  service bails out saying:

  service openstack-keystone restart
  Stopping keystone: [FAILED]
  Starting keystone: /usr/bin/python: No module named keystone.openstack.common
  ^
 [FAILED]

  I have all rpms installed

  # rpm -qa | grep openstack
  openstack-selinux-0.1.3-2.el6ost.noarch
  openstack-keystone-2014.1.1-1.el6.noarch
  openstack-utils-2014.1-3.el6.noarch

  # rpm -qa | grep keystone
  python-keystoneclient-0.9.0-1.el6.noarch
  python-keystone-2014.1.1-1.el6.noarch
  openstack-keystone-2014.1.1-1.el6.noarch

  I also performed an upgrade via `# pip install --upgrade python-keystoneclient`,
  which added "python_keystoneclient-0.10.1.dist-info" in site-packages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1358908/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358909] [NEW] Dropdowns in table filters should be consistent

2014-08-19 Thread Jeffrey Calcaterra
Public bug reported:

In the dropdowns for choosing the column filters in tables, all of the
items need to have "=" signs. Having a mixture is confusing.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1358909

Title:
  Dropdowns in table filters should be consistent

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the dropdowns for choosing the column filters in tables, all of the
  items need to have "=" signs. Having a mixture is confusing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1358909/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358904] [NEW] Warning for image_locations.id being empty while running tests

2014-08-19 Thread nikhil komawar
Public bug reported:

sqlalchemy/sql/default_comparator.py:35: SAWarning: The IN-predicate on 
"image_locations.id" was invoked with an empty sequence. This results in a 
contradiction, which nonetheless can be expensive to evaluate.  Consider 
alternative strategies for improved performance.
  return o[0](self, self.expr, op, *(other + o[1:]), **kwargs)

warning is shown.

Not sure if this is a bug, need some confirmation.
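The warning suggests its own remedy: short-circuit before issuing an `IN ()` over an empty sequence. A generic guard sketch (illustrative only, not glance's code; plain lists stand in for the SQLAlchemy query):

```python
def filter_by_location_ids(rows, ids):
    # An IN-predicate over an empty sequence is a contradiction that
    # SQLAlchemy warns about; returning early avoids the warning and a
    # pointless round trip to the database.
    if not ids:
        return []
    wanted = set(ids)
    return [row for row in rows if row["id"] in wanted]

rows = [{"id": 1}, {"id": 2}, {"id": 3}]
```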

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1358904

Title:
  Warning for image_locations.id being empty while running tests

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  sqlalchemy/sql/default_comparator.py:35: SAWarning: The IN-predicate on 
"image_locations.id" was invoked with an empty sequence. This results in a 
contradiction, which nonetheless can be expensive to evaluate.  Consider 
alternative strategies for improved performance.
return o[0](self, self.expr, op, *(other + o[1:]), **kwargs)

  warning is shown.

  Not sure if this is a bug, need some confirmation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1358904/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358751] Re: neutron lb-healthmonitor-create argument "timeout" required and present. Neutron still complains. python-neutronclient==2.3.6

2014-08-19 Thread Eugene Nikanorov
*** This bug is a duplicate of bug 1353536 ***
https://bugs.launchpad.net/bugs/1353536

** No longer affects: neutron

** This bug has been marked a duplicate of bug 1353536
   lb-healthmonitor-create doesn't recognize the timeout parameter

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358751

Title:
  neutron lb-healthmonitor-create argument "timeout" required and
  present.  Neutron still complains.  python-neutronclient==2.3.6

Status in Python client library for Neutron:
  New

Bug description:
  neutron lb-healthmonitor-create argument "timeout" required and
  present.  Neutron complains anyway.

  Bug exists in:
  python-neutronclient==2.3.6

  Bug does not exist in:
  python-neutronclient==2.3.5

  Log follows:
  (openstack)OSTML0204844:home$ neutron lb-healthmonitor-create --delay 6 --max-retries 3 --timeout 5 --type TCP
  usage: neutron lb-healthmonitor-create [-h] [-f {shell,table,value}]
                                         [-c COLUMN] [--max-width MAX_WIDTH]
                                         [--variable VARIABLE] [--prefix PREFIX]
                                         [--request-format {json,xml}]
                                         [--tenant-id TENANT_ID]
                                         [--admin-state-down]
                                         [--expected-codes EXPECTED_CODES]
                                         [--http-method HTTP_METHOD]
                                         [--url-path URL_PATH] --delay DELAY
                                         --max-retries MAX_RETRIES --timeout
                                         TIMEOUT --type {PING,TCP,HTTP,HTTPS}
  neutron lb-healthmonitor-create: error: argument --timeout is required
  (openstack)OSTML0204844:home$ neutron --version
  2.3.6
  (openstackdev)OSTML0204844:home$ pip install python-neutronclient==2.3.5
  Successfully installed python-neutronclient cliff simplejson cmd2 pyparsing
  (openstackdev)OSTML0204844:home$ neutron net-list
  +--------------------------------------+-------------------+----------------------------------------------------+
  | id                                   | name              | subnets                                            |
  +--------------------------------------+-------------------+----------------------------------------------------+
  | 871aceeb-720a-46b2-97fa-cdea90d0c963 | ext_net           | 81041d54-7806-4d78-967e-36b47f8177a5 10.30.40.0/24 |
  | af9ed28b-acee-4b95-97d3-45a02322bdbf | ext-net2          | 2f488c1a-6afd-4b0b-8006-cc1b2c3aaf2b 10.30.80.0/24 |
  | b1cd3520-e086-40ce-b524-c8da64320c4e | load-def-net1-125 | 04d7fa44-c04e-46d7-811e-933df5477bb4 10.125.1.0/24 |
  +--------------------------------------+-------------------+----------------------------------------------------+
  (openstackdev)OSTML0204844:home$ neutron lb-healthmonitor-create --delay 6 --max-retries 3 --timeout 5 --type TCP
  Created a new health_monitor:
  +----------------+--------------------------------------+
  | Field          | Value                                |
  +----------------+--------------------------------------+
  | admin_state_up | True                                 |
  | delay          | 6                                    |
  | id             | 46d4808c-7601-4928-8a55-e040a48a32e7 |
  | max_retries    | 3                                    |
  | pools          |                                      |
  | tenant_id      | ab98e98fc0474508b8f4a44ae05dc118     |
  | timeout        | 5                                    |
  | type           | TCP                                  |
  +----------------+--------------------------------------+
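For comparison, the option spec from the usage text above boils down to a plain argparse definition, and argparse itself accepts a supplied `--timeout` without complaint, so the 2.3.6 regression lies in how the client wires the option in. A minimal stand-in (not python-neutronclient's actual parser):

```python
import argparse

# Stand-in for the lb-healthmonitor-create option spec shown in the
# usage message; not python-neutronclient's real parser.
parser = argparse.ArgumentParser(prog="neutron lb-healthmonitor-create")
parser.add_argument("--delay", required=True)
parser.add_argument("--max-retries", required=True)
parser.add_argument("--timeout", required=True)
parser.add_argument("--type", required=True,
                    choices=["PING", "TCP", "HTTP", "HTTPS"])

# The exact command line from the bug report parses cleanly here.
args = parser.parse_args(
    ["--delay", "6", "--max-retries", "3", "--timeout", "5", "--type", "TCP"])
```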

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-neutronclient/+bug/1358751/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358751] Re: neutron lb-healthmonitor-create argument "timeout" required and present. Neutron still complains. python-neutronclient==2.3.6

2014-08-19 Thread Sam Betts
** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358751

Title:
  neutron lb-healthmonitor-create argument "timeout" required and
  present.  Neutron still complains.  python-neutronclient==2.3.6

Status in OpenStack Neutron (virtual network service):
  New
Status in Python client library for Neutron:
  New

Bug description:
  neutron lb-healthmonitor-create argument "timeout" required and
  present.  Neutron complains anyway.

  Bug exists in:
  python-neutronclient==2.3.6

  Bug does not exist in:
  python-neutronclient==2.3.5

  Log follows:
  (openstack)OSTML0204844:home$ neutron lb-healthmonitor-create --delay 6 --max-retries 3 --timeout 5 --type TCP
  usage: neutron lb-healthmonitor-create [-h] [-f {shell,table,value}]
                                         [-c COLUMN] [--max-width MAX_WIDTH]
                                         [--variable VARIABLE] [--prefix PREFIX]
                                         [--request-format {json,xml}]
                                         [--tenant-id TENANT_ID]
                                         [--admin-state-down]
                                         [--expected-codes EXPECTED_CODES]
                                         [--http-method HTTP_METHOD]
                                         [--url-path URL_PATH] --delay DELAY
                                         --max-retries MAX_RETRIES --timeout
                                         TIMEOUT --type {PING,TCP,HTTP,HTTPS}
  neutron lb-healthmonitor-create: error: argument --timeout is required
  (openstack)OSTML0204844:home$ neutron --version
  2.3.6
  (openstackdev)OSTML0204844:home$ pip install python-neutronclient==2.3.5
  Successfully installed python-neutronclient cliff simplejson cmd2 pyparsing
  (openstackdev)OSTML0204844:home$ neutron net-list
  +--------------------------------------+-------------------+----------------------------------------------------+
  | id                                   | name              | subnets                                            |
  +--------------------------------------+-------------------+----------------------------------------------------+
  | 871aceeb-720a-46b2-97fa-cdea90d0c963 | ext_net           | 81041d54-7806-4d78-967e-36b47f8177a5 10.30.40.0/24 |
  | af9ed28b-acee-4b95-97d3-45a02322bdbf | ext-net2          | 2f488c1a-6afd-4b0b-8006-cc1b2c3aaf2b 10.30.80.0/24 |
  | b1cd3520-e086-40ce-b524-c8da64320c4e | load-def-net1-125 | 04d7fa44-c04e-46d7-811e-933df5477bb4 10.125.1.0/24 |
  +--------------------------------------+-------------------+----------------------------------------------------+
  (openstackdev)OSTML0204844:home$ neutron lb-healthmonitor-create --delay 6 --max-retries 3 --timeout 5 --type TCP
  Created a new health_monitor:
  +----------------+--------------------------------------+
  | Field          | Value                                |
  +----------------+--------------------------------------+
  | admin_state_up | True                                 |
  | delay          | 6                                    |
  | id             | 46d4808c-7601-4928-8a55-e040a48a32e7 |
  | max_retries    | 3                                    |
  | pools          |                                      |
  | tenant_id      | ab98e98fc0474508b8f4a44ae05dc118     |
  | timeout        | 5                                    |
  | type           | TCP                                  |
  +----------------+--------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358751/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358888] [NEW] Task ID should be logged when changing task status

2014-08-19 Thread Eddie Sheffield
Public bug reported:

Currently messages are logged when a task status changes indicating
success or failure of the change. However no task ID is logged making it
difficult to track which task is changing. The task ID should be added
to these log messages.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1358888

Title:
  Task ID should be logged when changing task status

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Currently messages are logged when a task status changes indicating
  success or failure of the change. However no task ID is logged making
  it difficult to track which task is changing. The task ID should be
  added to these log messages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1358888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358881] [NEW] jsonschema 2.3.0 -> 2.4.0 upgrade breaking nova.tests.test_api_validation tests

2014-08-19 Thread Corey Wright
Public bug reported:

The following two failures appeared after upgrading jsonschema to 2.4.0;
downgrading to 2.3.0 returned the tests to passing.

==
FAIL: 
nova.tests.test_api_validation.TcpUdpPortTestCase.test_validate_tcp_udp_port_fails
--
Traceback (most recent call last):
_StringException: Empty attachments:
  pythonlogging:''
  stderr
  stdout

Traceback (most recent call last):
  File "/home/dev/Desktop/nova-test/nova/tests/test_api_validation.py", line 
602, in test_validate_tcp_udp_port_fails
expected_detail=detail)
  File "/home/dev/Desktop/nova-test/nova/tests/test_api_validation.py", line 
31, in check_validation_error
self.assertEqual(ex.kwargs, expected_kwargs)
  File 
"/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 321, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/home/dev/Desktop/nova-test/.venv/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 406, in assertThat
raise mismatch_error
MismatchError: !=:
reference = {'code': 400,
 'detail': u'Invalid input for field/attribute foo. Value: 65536. 65536 is 
greater than the maximum of 65535'}
actual= {'code': 400,
 'detail': 'Invalid input for field/attribute foo. Value: 65536. 65536.0 is 
greater than the maximum of 65535'}


==
FAIL: 
nova.tests.test_api_validation.IntegerRangeTestCase.test_validate_integer_range_fails
--
Traceback (most recent call last):
_StringException: Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
INFO [migrate.versioning.api] 215 -> 216... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 216 -> 217... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 217 -> 218... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 218 -> 219... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 219 -> 220... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 220 -> 221... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 221 -> 222... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 222 -> 223... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 223 -> 224... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 224 -> 225... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 225 -> 226... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 226 -> 227... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 227 -> 228... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 228 -> 229... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 229 -> 230... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 230 -> 231... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 231 -> 232... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 232 -> 233... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 233 -> 234... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 234 -> 235... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 235 -> 236... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 236 -> 237... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 237 -> 238... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 238 -> 239... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 239 -> 240... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 240 -> 241... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 241 -> 242... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 242 -> 243... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 243 -> 244... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 244 -> 245... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 245 -> 246... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 246 -> 247... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 247 -> 248... 
INFO [248_add_expire_reservations_index] Skipped adding 
reservations_deleted_expire_idx because an equivalent index already exists.
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 248 -> 249... 
INFO [migrate.versioning.api] done
INFO [migrate.versioning.api] 249 -> 250... 
INFO [migrate.versioning.api] done
}}}

Traceback (most recent call last):
  File "/home/dev/Desktop/nova-test/nova/tests/test_api_validation.py", line 
361, in test_validate_integer_range_fails
expected_detail=detail)
  File "/home/dev/Desktop/nova-test/nova/tests/test_api_validation.py", line 

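One way to keep such tests stable across jsonschema upgrades is to stop comparing the full error string (2.4.0 renders the offending number as 65536.0 where 2.3.0 printed 65536) and assert only on the structured fields plus a stable prefix. A sketch (illustrative, not nova's actual fix):

```python
def check_validation_error(kwargs, expected_code):
    # Compare the structured field exactly...
    assert kwargs["code"] == expected_code
    # ...but only a stable prefix of the human-readable detail, since
    # jsonschema's number formatting changed between 2.3.0 and 2.4.0
    # ("65536" vs "65536.0").
    assert kwargs["detail"].startswith("Invalid input for field/attribute")

# Both versions' messages satisfy the relaxed check:
for rendered in ("65536", "65536.0"):
    check_validation_error(
        {"code": 400,
         "detail": "Invalid input for field/attribute foo. Value: 65536. "
                   "%s is greater than the maximum of 65535" % rendered},
        400)
```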
[Yahoo-eng-team] [Bug 1289546] Re: test_create_backup times out while waiting for the image to be active

2014-08-19 Thread Matt Riedemann
There aren't any more hits on this recently so just going to close it
out, we can re-open again later if it shows up.

** No longer affects: nova

** No longer affects: glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289546

Title:
  test_create_backup times out while waiting for the image to be active

Status in Tempest:
  Fix Committed

Bug description:
  Seems that this is unreported but there are other bugs for the same
  test case having problems.

  http://logs.openstack.org/73/76373/9/check/check-tempest-dsvm-
  full/748db9b/console.html

  2014-03-07 17:03:15.693 | Traceback (most recent call last):
  2014-03-07 17:03:15.693 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 265, in 
test_create_backup
  2014-03-07 17:03:15.693 | 
self.os.image_client.wait_for_image_status(image2_id, 'active')
  2014-03-07 17:03:15.693 |   File 
"tempest/services/image/v1/json/image_client.py", line 295, in 
wait_for_image_status
  2014-03-07 17:03:15.693 | raise exceptions.TimeoutException(message)
  2014-03-07 17:03:15.694 | TimeoutException: Request timed out
  2014-03-07 17:03:15.694 | Details: Time Limit Exceeded! (196s)while waiting 
for active, but we got queued.

  Looks like the error message started showing up around 3/2:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGltZSBMaW1pdCBFeGNlZWRlZCFcIiBBTkQgbWVzc2FnZTpcIndoaWxlIHdhaXRpbmcgZm9yIGFjdGl2ZSwgYnV0IHdlIGdvdCBxdWV1ZWQuXCIgQU5EIGZpbGVuYW1lOlwiY29uc29sZS5odG1sXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTQyMjA1ODY0NzZ9

  Possibly related bugs:

  bug 1288038 - Invalid backup: Backup status must be available or error
  bug 1280937 - test_create_backup times out on waiting for resource delete
  bug 1267326 - test_create_backup fails due to unexpected image number
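The timeout path itself is a plain polling loop; a generic sketch mirroring tempest's `wait_for_image_status` behavior (names and message format approximated from the console output above, not tempest's code):

```python
import time

def wait_for_status(get_status, wanted, timeout=10.0, interval=0.1):
    # Poll until the wanted status appears; on expiry, raise and report
    # the last observed status ("... while waiting for active, but we
    # got queued" in the failure above).
    deadline = time.monotonic() + timeout
    status = get_status()
    while status != wanted:
        if time.monotonic() >= deadline:
            raise TimeoutError(
                "Time Limit Exceeded! (%ss) while waiting for %s, "
                "but we got %s." % (timeout, wanted, status))
        time.sleep(interval)
        status = get_status()
    return status
```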

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1289546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277061] Re: db migration does not create tenant_id index of quotas table

2014-08-19 Thread Akihiro Motoki
The issue was already fixed as a part of db migration healing work.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1277061

Title:
  db migration does not create tenant_id index of quotas table

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  In neutron/db/quota_db, Quota table has an index for tenant_id,
  but db migration script does not create it.

  This affects all plugins which create quotas table with db migration.

  For example, quotas for nec plugin after db migration (upgrade head)
  is:

  $ mysql neutron_nec -e 'show create table quotas;'

  | quotas | CREATE TABLE `quotas` (
`id` varchar(36) NOT NULL,
`tenant_id` varchar(255) DEFAULT NULL,
`resource` varchar(255) DEFAULT NULL,
`limit` int(11) DEFAULT NULL,
PRIMARY KEY (`id`)
  ) ENGINE=InnoDB DEFAULT CHARSET=latin1 |

  neutron/db/quota_db.py:

  class Quota(model_base.BASEV2, models_v2.HasId):
  """Represent a single quota override for a tenant.

  If there is no row for a given tenant id and resource, then the
  default for the quota class is used.
  """
  tenant_id = sa.Column(sa.String(255), index=True)
  resource = sa.Column(sa.String(255))
  limit = sa.Column(sa.Integer)
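The mismatch is easy to demonstrate: building the table straight from the model metadata produces the index the migration omitted. A self-contained sketch against an in-memory SQLite database (assumes SQLAlchemy is installed; this is not neutron's migration code):

```python
import sqlalchemy as sa

metadata = sa.MetaData()
# Columns mirror the Quota model quoted above; index=True on tenant_id
# is exactly what the hand-written migration failed to reproduce.
quotas = sa.Table(
    "quotas", metadata,
    sa.Column("id", sa.String(36), primary_key=True),
    sa.Column("tenant_id", sa.String(255), index=True),
    sa.Column("resource", sa.String(255)),
    sa.Column("limit", sa.Integer),
)

engine = sa.create_engine("sqlite://")
metadata.create_all(engine)

# Creating from the model yields the tenant_id index the migration lacks.
indexes = sa.inspect(engine).get_indexes("quotas")
```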

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1277061/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297016] Re: flake8 configuration does not ignore rope artifacts

2014-08-19 Thread Gary W. Smith
** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
 Assignee: (unassigned) => Gary W. Smith (gary-w-smith)

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1297016

Title:
  flake8 configuration does not ignore rope artifacts

Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Neutron (virtual network service):
  Fix Released

Bug description:
  The tox.ini configuration for 'exclude' overrides any global
  definition, so excluding .ropeproject in tox.ini is the only way to
  avoid false negatives from confusing users of rope (source
  navigation/refactoring tool for python).

  Reference: http://rope.sourceforge.net/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1297016/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358856] [NEW] height of textarea is too large

2014-08-19 Thread Akihiro Motoki
Public bug reported:

In the Juno master branch, the height of a textarea is 10 lines, which
seems too large given the use cases of text areas in Horizon forms,
especially when a form has multiple textareas, e.g. the "Create Network"
workflow's "Subnet Details" tab.

3~5 lines seems good to me.

** Affects: horizon
 Importance: Medium
 Status: New


** Tags: ux

** Attachment added: "スクリーンショット 2014-08-20 2.12.25.png"
   
https://bugs.launchpad.net/bugs/1358856/+attachment/4181770/+files/%E3%82%B9%E3%82%AF%E3%83%AA%E3%83%BC%E3%83%B3%E3%82%B7%E3%83%A7%E3%83%83%E3%83%88%202014-08-20%202.12.25.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1358856

Title:
  height of textarea is too large

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the Juno master branch, the height of a textarea is 10 lines, which
  seems too large given the use cases of text areas in Horizon forms,
  especially when a form has multiple textareas, e.g. the "Create
  Network" workflow's "Subnet Details" tab.

  3~5 lines seems good to me.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1358856/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358818] [NEW] extra_specs string check breaks backward compatibility

2014-08-19 Thread Matthew Edmonds
Public bug reported:

We've found that while with Icehouse we were able to specify extra_specs
values as ints or floats, in Juno the command fails unless we make these
values strings by quoting them. This breaks backward compatibility.

compare Icehouse:

curl -k -i -X POST 
http://127.0.0.1:8774/v2/982607a6a1134514abac252fc25384ad/flavors/1/os-extra_specs
 -H "X-Auth-Token: *" -H "Accept: application/json" -H "Content-Type: 
application/json" -d 
'{"extra_specs":{"powervm:proc_units":"0.2","powervm:processor_compatibility":"default","powervm:min_proc_units":"0.1","powervm:max_proc_units":"0.5","powervm:min_vcpu":1,"powervm:max_vcpu":5,"powervm:min_mem":1024,"powervm:max_mem":4096,"powervm:availability_priority":127,"powervm:dedicated_proc":"false","powervm:uncapped":"true","powervm:shared_weight":128}}';
 echo
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 385
X-Compute-Request-Id: req-9132922d-c703-4573-9822-9ca7a6bf7b0d
Date: Thu, 14 Aug 2014 18:25:02 GMT

{"extra_specs": {"powervm:processor_compatibility": "default",
"powervm:max_proc_units": "0.5", "powervm:shared_weight": 128,
"powervm:min_mem": 1024, "powervm:max_mem": 4096, "powervm:uncapped":
"true", "powervm:proc_units": "0.2", "powervm:dedicated_proc": "false",
"powervm:max_vcpu": 5, "powervm:availability_priority": 127,
"powervm:min_proc_units": "0.1", "powervm:min_vcpu": 1}}


to Juno:

curl -k -i -X POST 
http://127.0.0.1:8774/v2/be2ffade1e0b4bed83619e00482317d1/flavors/1/os-extra_specs
 -H "X-Auth-Token: *" -H "Accept: application/json" -H "Content-Type: 
application/json" -d 
'{"extra_specs":{"powervm:proc_units":"0.2","powervm:processor_compatibility":"default","powervm:min_proc_units":"0.1","powervm:max_proc_units":"0.5","powervm:min_vcpu":1,"powervm:max_vcpu":5,"powervm:min_mem":1024,"powervm:max_mem":4096,"powervm:availability_priority":127,"powervm:dedicated_proc":"false","powervm:uncapped":"true","powervm:shared_weight":128}}';
 echo
HTTP/1.1 400 Bad Request
Content-Length: 88
Content-Type: application/json; charset=UTF-8
Date: Thu, 14 Aug 2014 18:25:46 GMT

{"badRequest": {"message": "extra_specs value is not a string or
unicode", "code": 400}}


if I modify the data sent so that everything is a string, it will work for Juno:

curl -k -i -X POST 
http://127.0.0.1:8774/v2/be2ffade1e0b4bed83619e00482317d1/flavors/1/os-extra_specs
 -H "X-Auth-Token: *" -H "Accept: application/json" -H "Content-Type: 
application/json" -d 
'{"extra_specs":{"powervm:proc_units":"0.2","powervm:processor_compatibility":"default","powervm:min_proc_units":"0.1","powervm:max_proc_units":"0.5","powervm:min_vcpu":"1","powervm:max_vcpu":"5","powervm:min_mem":"1024","powervm:max_mem":"4096","powervm:availability_priority":"127","powervm:dedicated_proc":"false","powervm:uncapped":"true","powervm:shared_weight":"128"}}';
 echo
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 397
Date: Thu, 14 Aug 2014 18:26:27 GMT

{"extra_specs": {"powervm:processor_compatibility": "default",
"powervm:max_proc_units": "0.5", "powervm:shared_weight": "128",
"powervm:min_mem": "1024", "powervm:max_mem": "4096",
"powervm:uncapped": "true", "powervm:proc_units": "0.2",
"powervm:dedicated_proc": "false", "powervm:max_vcpu": "5",
"powervm:availability_priority": "127", "powervm:min_proc_units": "0.1",
"powervm:min_vcpu": "1"}}


The API change guidelines (https://wiki.openstack.org/wiki/APIChangeGuidelines) 
describe as "generally not acceptable": "A change such that a request which was 
successful before now results in an error response (unless the success reported 
previously was hiding an existing error condition)". That is exactly what this 
is.
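Until the validation is relaxed server-side, a client-side workaround is simple: coerce numeric extra_specs values to strings before sending the request body. A sketch, assuming simple scalar values (the helper name is hypothetical):

```python
def stringify_extra_specs(extra_specs):
    # Juno's input validation rejects non-string extra_specs values, so
    # coerce the ints/floats Icehouse accepted into strings up front.
    return {key: value if isinstance(value, str) else str(value)
            for key, value in extra_specs.items()}

specs = stringify_extra_specs({
    "powervm:min_vcpu": 1,          # int -> "1"
    "powervm:proc_units": "0.2",    # already a string, left alone
    "powervm:shared_weight": 128,   # int -> "128"
})
```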

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358818

Title:
  extra_specs string check breaks backward compatibility

Status in OpenStack Compute (Nova):
  New

Bug description:
  We've found that while with Icehouse we were able to specify
  extra_specs values as ints or floats, in Juno the command fails unless
  we make these values strings by quoting them. This breaks backward
  compatibility.

  compare Icehouse:

  curl -k -i -X POST 
http://127.0.0.1:8774/v2/982607a6a1134514abac252fc25384ad/flavors/1/os-extra_specs
 -H "X-Auth-Token: *" -H "Accept: application/json" -H "Content-Type: 
application/json" -d 
'{"extra_specs":{"powervm:proc_units":"0.2","powervm:processor_compatibility":"default","powervm:min_proc_units":"0.1","powervm:max_proc_units":"0.5","powervm:min_vcpu":1,"powervm:max_vcpu":5,"powervm:min_mem":1024,"powervm:max_mem":4096,"powervm:availability_priority":127,"powervm:dedicated_proc":"false","powervm:uncapped":"true","powervm:shared_weight":128}}';
 echo
  HTTP/1.1 200 OK
  Content-Type: application/json
  Content-Length: 385
  X-Compute-Request-Id: req-9132922d-c703-4573-9822-9ca7a6

[Yahoo-eng-team] [Bug 1306835] Re: V3 list users filter by email address throws exception

2014-08-19 Thread Brant Knudson
** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1306835

Title:
  V3 list users  filter by email address throws exception

Status in OpenStack Identity (Keystone):
  Fix Released
Status in OpenStack Manuals:
  New

Bug description:
  V3 list_users filter by email throws an exception: the User model has no
  'email' attribute.

  keystone.common.wsgi): 2014-04-11 23:09:00,422 ERROR type object 'User' has 
no attribute 'email'
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 206, 
in __call__
  result = method(context, **params)
File "/usr/lib/python2.7/dist-packages/keystone/common/controller.py", line 
183, in wrapper
  return f(self, context, filters, **kwargs)
File "/usr/lib/python2.7/dist-packages/keystone/identity/controllers.py", 
line 284, in list_users
  hints=hints)
File "/usr/lib/python2.7/dist-packages/keystone/common/manager.py", line 
52, in wrapper
  return f(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 
189, in wrapper
  return f(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 
328, in list_users
  ref_list = driver.list_users(hints or driver_hints.Hints())
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 
227, in wrapper
  return f(self, hints, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/keystone/identity/backends/sql.py", 
line 132, in list_users
  user_refs = sql.filter_limit_query(User, query, hints)
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 
374, in filter_limit_query
  query = _filter(model, query, hints)
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 
326, in _filter
  filter_dict = exact_filter(model, filter_, filter_dict, hints)
File "/usr/lib/python2.7/dist-packages/keystone/common/sql/core.py", line 
312, in exact_filter
  if isinstance(getattr(model, key).property.columns[0].type,
  AttributeError: type object 'User' has no attribute 'email'
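The crash is a bare getattr() on an attribute the User model does not map; a defensive variant of the exact-filter step (illustrative stand-ins, not the released fix) would ignore unsupported keys instead:

```python
def safe_exact_filter(model, key, value, query):
    """Apply an equality filter only if `key` is a mapped attribute of
    `model`; silently ignore unsupported filters such as 'email'."""
    column = getattr(model, key, None)
    if column is None:
        return query  # unknown attribute: leave the query untouched
    return query.filter(column == value)

class FakeQuery:  # stand-in for a SQLAlchemy Query
    def filter(self, criterion):
        return ("filtered", criterion)

class User:  # stand-in model: has 'name' but no 'email'
    name = "name_column"

query = FakeQuery()
print(safe_exact_filter(User, "email", "a@example.com", query) is query)
```

Whether to silently ignore or reject an unsupported filter is a policy choice; the sketch only shows how to avoid the AttributeError.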

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1306835/+subscriptions



[Yahoo-eng-team] [Bug 1358805] [NEW] Incorrect API in the add Tenant Access to private flavor action

2014-08-19 Thread KaiLin
Public bug reported:

When I give a specified tenant access to a private flavor, I use it like this:

POST /v2/​{tenant_id}​/flavors/​{flavor_id}​/action
{
"addTenantAccess": {
"tenant": "fake_tenant"
}
}
tenant: The name of the tenant to which to give access.

The response is:
{
"flavor_access": [
{
"flavor_id": "10",
#here is "tenant_id"
"tenant_id": "fake_tenant"
},
{
"flavor_id": "10",
"tenant_id": "openstack"
}
]
}

when I use the private flavor to create VM in the specified tenant,it
failed. But if I add the tenant access by using the tenant id,it can
create VM successfully in the specified tenant .

POST /v2/​{tenant_id}​/flavors/​{flavor_id}​/action
{
"addTenantAccess": {
"tenant": "{tenant_id}"
}
}

I checked the code; it uses the "tenant" value as the project id. So we should
change "tenant" to "tenant_id" in the API of the add tenant access to private
flavor action.
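Until the parameter is renamed or documented correctly, the safe client-side rule is to always send the project ID in the "tenant" key; a minimal sketch of building the request body (the ID below is a placeholder):

```python
import json

def add_tenant_access_body(project_id):
    # Nova treats the "tenant" key as a project *ID*, not a name, so
    # passing the name appears to succeed but VM creation later fails.
    return json.dumps({"addTenantAccess": {"tenant": project_id}})

print(add_tenant_access_body("0123456789abcdef0123456789abcdef"))
```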

** Affects: nova
 Importance: Undecided
 Assignee: KaiLin (linkai3)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => KaiLin (linkai3)

** Description changed:

  when I give a specified tenant access to the specified private flavor, I
  use it like this :
  
  POST /v2/​{tenant_id}​/flavors/​{flavor_id}​/action
  {
- "addTenantAccess": {
- "tenant": "fake_tenant"
- }
+ "addTenantAccess": {
+ "tenant": "fake_tenant"
+ }
  }
  tenant: The name of the tenant to which to give access.
  
  The response is:
  {
- "flavor_access": [
- {
- "flavor_id": "10",
- #here is "tenant_id"
- "tenant_id": "fake_tenant"
- },
- {
- "flavor_id": "10",
- "tenant_id": "openstack"
- }
- ]
+ "flavor_access": [
+ {
+ "flavor_id": "10",
+ #here is "tenant_id"
+ "tenant_id": "fake_tenant"
+ },
+ {
+ "flavor_id": "10",
+ "tenant_id": "openstack"
+ }
+ ]
  }
  
  when I use the private flavor to create VM in the specified tenant,it
  failed. But if I add the tenant access by using the tenant id,it can
  create VM successfully in the specified tenant .
  
  POST /v2/​{tenant_id}​/flavors/​{flavor_id}​/action
  {
- "addTenantAccess": {
- "tenant": "{tenant_id}"
- }
+ "addTenantAccess": {
+ "tenant": "{tenant_id}"
+ }
  }
  
- I check the code, It also use the "tenant" information as the "project id".
+ I check the code, It also uses the "tenant" information as the "project id".
  So we should change "tenant" to "tenant_id" in the API of  the add Tenant 
Access to private flavor action.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358805

Title:
  Incorrect API in the add Tenant Access to private flavor action

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I give a specified tenant access to a private flavor, I use it like
  this:

  POST /v2/​{tenant_id}​/flavors/​{flavor_id}​/action
  {
  "addTenantAccess": {
  "tenant": "fake_tenant"
  }
  }
  tenant: The name of the tenant to which to give access.

  The response is:
  {
  "flavor_access": [
  {
  "flavor_id": "10",
  #here is "tenant_id"
  "tenant_id": "fake_tenant"
  },
  {
  "flavor_id": "10",
  "tenant_id": "openstack"
  }
  ]
  }

  when I use the private flavor to create VM in the specified tenant,it
  failed. But if I add the tenant access by using the tenant id,it can
  create VM successfully in the specified tenant .

  POST /v2/​{tenant_id}​/flavors/​{flavor_id}​/action
  {
  "addTenantAccess": {
  "tenant": "{tenant_id}"
  }
  }

  I checked the code; it uses the "tenant" value as the project id. So we should
  change "tenant" to "tenant_id" in the API of the add tenant access to private
  flavor action.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358805/+subscriptions



[Yahoo-eng-team] [Bug 1358796] [NEW] Min Disk and Min RAM fields should not allow negative values

2014-08-19 Thread Bradley Jones
Public bug reported:

Seen in Create An Image modal

** Affects: horizon
 Importance: Undecided
 Assignee: Bradley Jones (bradjones)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Bradley Jones (bradjones)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1358796

Title:
  Min Disk and Min RAM fields should not allow negative values

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Seen in Create An Image modal

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1358796/+subscriptions



[Yahoo-eng-team] [Bug 1358795] [NEW] instance.create.end notification may not be sent if the instance is deleted during boot

2014-08-19 Thread Andrew Laski
Public bug reported:

If an instance is deleted at a point during the virt driver.spawn()
method that doesn't raise an exception, or while the power state is
being updated, then the instance.save() which sets the final power
state, vm_state, task_state, and launched_at will raise InstanceNotFound
or UnexpectedDeletingTaskStateError and cause the final create.end
notification to be skipped.  This could have implications for
billing/usage in a deployment.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358795

Title:
  instance.create.end notification may not be sent if the instance is
  deleted during boot

Status in OpenStack Compute (Nova):
  New

Bug description:
  If an instance is deleted at a point during the virt driver.spawn()
  method that doesn't raise an exception, or while the power state is
  being updated, then the instance.save() which sets the final power
  state, vm_state, task_state, and launched_at will raise
  InstanceNotFound or UnexpectedDeletingTaskStateError and cause the
  final create.end notification to be skipped.  This could have
  implications for billing/usage in a deployment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358795/+subscriptions



[Yahoo-eng-team] [Bug 1358772] [NEW] get_available_datastores - possible issue with datastore accessibility

2014-08-19 Thread Davanum Srinivas (DIMS)
Public bug reported:

Vipin found this issue during a code 'port' from nova to oslo.vmware in review 
114551:
https://review.openstack.org/#/c/114551/14/oslo/vmware/selector.py,unified

https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/ds_util.py#L303

Quote from vipin:

"I think there is a problem here.

Assume that cluster_mor is None and host_mor is h1. If a datastore d1 is
attached to hosts h1 and h2 where it is accessible only to h2 and not
h1, summary.accessible will be True even though it is not accessible to
h1.

We should use HostMountInfo.accessible in this case."
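The difference can be illustrated with stand-in objects (class and attribute names only loosely mirror the vSphere API):

```python
class HostMount:  # stand-in for vim HostMountInfo
    def __init__(self, host, accessible):
        self.host = host
        self.accessible = accessible

def accessible_from(mounts, host):
    """Per-host accessibility must come from HostMountInfo, because
    summary.accessible only says that *some* host can reach the datastore."""
    return any(m.host == host and m.accessible for m in mounts)

# d1 is mounted on h1 and h2 but only usable from h2:
d1_mounts = [HostMount("h1", False), HostMount("h2", True)]
summary_accessible = any(m.accessible for m in d1_mounts)  # misleading for h1
print(summary_accessible, accessible_from(d1_mounts, "h1"))  # -> True False
```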

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358772

Title:
  get_available_datastores - possible issue with datastore accessibility

Status in OpenStack Compute (Nova):
  New

Bug description:
  Vipin found this issue during a code 'port' from nova to oslo.vmware in 
review 114551:
  https://review.openstack.org/#/c/114551/14/oslo/vmware/selector.py,unified

  
https://github.com/openstack/nova/blob/master/nova/virt/vmwareapi/ds_util.py#L303

  Quote from vipin:

  "I think there is a problem here.

  Assume that cluster_mor is None and host_mor is h1. If a datastore d1
  is attached to hosts h1 and h2 where it is accessible only to h2 and
  not h1, summary.accessible will be True even though it is not
  accessible to h1.

  We should use HostMountInfo.accessible in this case."

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358772/+subscriptions



[Yahoo-eng-team] [Bug 1304181] Re: neutron should validate gateway_ip is in subnet

2014-08-19 Thread Thierry Carrez
Backport reviews claim there is a DoS here to justify bypassing stable
branch rules. Adding the security project to investigate that.

** Information type changed from Public to Public Security

** Also affects: ossa
   Importance: Undecided
   Status: New

** Also affects: neutron/havana
   Importance: Undecided
   Status: New

** Also affects: neutron/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1304181

Title:
  neutron should validate gateway_ip is in subnet

Status in OpenStack Neutron (virtual network service):
  Fix Committed
Status in neutron havana series:
  New
Status in neutron icehouse series:
  New
Status in OpenStack Security Advisories:
  New

Bug description:
  I don't believe this is actually a valid network configuration:

  arosen@arosen-MacBookPro:~/devstack$ neutron subnet-show  
be0a602b-ea52-4b13-8003-207be20187da
  +------------------+------------------------------------------------+
  | Field            | Value                                          |
  +------------------+------------------------------------------------+
  | allocation_pools | {"start": "10.11.12.1", "end": "10.11.12.254"} |
  | cidr             | 10.11.12.0/24                                  |
  | dns_nameservers  |                                                |
  | enable_dhcp      | True                                           |
  | gateway_ip       | 10.0.0.1                                       |
  | host_routes      |                                                |
  | id               | be0a602b-ea52-4b13-8003-207be20187da           |
  | ip_version       | 4                                              |
  | name             | private-subnet                                 |
  | network_id       | 53ec3eac-9404-41d4-a899-da4f32045abd           |
  | tenant_id        | f2d9c1726aa940d3bd5a8ee529ea2480               |
  +------------------+------------------------------------------------+
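The missing validation amounts to a containment check; with Python 3's stdlib ipaddress module it could look like this (illustrative, not Neutron's actual code):

```python
import ipaddress

def gateway_valid(cidr, gateway_ip):
    """Return True only when the gateway address lies inside the subnet CIDR."""
    return ipaddress.ip_address(gateway_ip) in ipaddress.ip_network(cidr)

print(gateway_valid("10.11.12.0/24", "10.0.0.1"))   # the configuration above
print(gateway_valid("10.11.12.0/24", "10.11.12.1"))
```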

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1304181/+subscriptions



[Yahoo-eng-team] [Bug 1358765] [NEW] Routes should be available from interfaces.template

2014-08-19 Thread Mathieu Gagné
Public bug reported:

Routes should be made available when generating the interfaces file from
interfaces.template.

People overriding interfaces.template might want to inject routes too
but they can't. The routes are not injected in the template engine. We
should make them available for such use cases.
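If routes were passed into the template context, an overridden interfaces.template could render them along these lines (a sketch in the Django template syntax nova's file injection uses; the variable names `routes`, `route.cidr`, and `route.gateway` are assumptions, not an existing interface):

```
{% for route in routes %}
up ip route add {{ route.cidr }} via {{ route.gateway }} dev {{ ifc.name }}
{% endfor %}
```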

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358765

Title:
  Routes should be available from interfaces.template

Status in OpenStack Compute (Nova):
  New

Bug description:
  Routes should be made available when generating the interfaces file
  from interfaces.template.

  People overriding interfaces.template might want to inject routes too
  but they can't. The routes are not injected in the template engine. We
  should make them available for such use cases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358765/+subscriptions



[Yahoo-eng-team] [Bug 1358751] [NEW] neutron lb-healthmonitor-create argument "timeout" required and present. Neutron still complains. python-neutronclient==2.3.6

2014-08-19 Thread Max Cameron
Public bug reported:

neutron lb-healthmonitor-create argument "timeout" required and present.
Neutron complains anyway.

Bug exists in:
python-neutronclient==2.3.6

Bug does not exist in:
python-neutronclient==2.3.5

Log follows:
(openstack)OSTML0204844:home$ neutron lb-healthmonitor-create --delay 6 
--max-retries 3 --timeout 5 --type TCP
usage: neutron lb-healthmonitor-create [-h] [-f {shell,table,value}]
   [-c COLUMN] [--max-width ]
   [--variable VARIABLE] [--prefix PREFIX]
   [--request-format {json,xml}]
   [--tenant-id TENANT_ID]
   [--admin-state-down]
   [--expected-codes EXPECTED_CODES]
   [--http-method HTTP_METHOD]
   [--url-path URL_PATH] --delay DELAY
   --max-retries MAX_RETRIES --timeout
   TIMEOUT --type {PING,TCP,HTTP,HTTPS}
neutron lb-healthmonitor-create: error: argument --timeout is required
(openstack)OSTML0204844:home$ neutron --version
2.3.6
(openstackdev)OSTML0204844:home$ pip install python-neutronclient==2.3.5
Successfully installed python-neutronclient cliff simplejson cmd2 pyparsing
(openstackdev)OSTML0204844:home$ neutron net-list
+--------------------------------------+-------------------+----------------------------------------------------+
| id                                   | name              | subnets                                            |
+--------------------------------------+-------------------+----------------------------------------------------+
| 871aceeb-720a-46b2-97fa-cdea90d0c963 | ext_net           | 81041d54-7806-4d78-967e-36b47f8177a5 10.30.40.0/24 |
| af9ed28b-acee-4b95-97d3-45a02322bdbf | ext-net2          | 2f488c1a-6afd-4b0b-8006-cc1b2c3aaf2b 10.30.80.0/24 |
| b1cd3520-e086-40ce-b524-c8da64320c4e | load-def-net1-125 | 04d7fa44-c04e-46d7-811e-933df5477bb4 10.125.1.0/24 |
+--------------------------------------+-------------------+----------------------------------------------------+
(openstackdev)OSTML0204844:home$ neutron lb-healthmonitor-create --delay 6 
--max-retries 3 --timeout 5 --type TCP
Created a new health_monitor:
+----------------+--------------------------------------+
| Field          | Value                                |
+----------------+--------------------------------------+
| admin_state_up | True                                 |
| delay          | 6                                    |
| id             | 46d4808c-7601-4928-8a55-e040a48a32e7 |
| max_retries    | 3                                    |
| pools          |                                      |
| tenant_id      | ab98e98fc0474508b8f4a44ae05dc118     |
| timeout        | 5                                    |
| type           | TCP                                  |
+----------------+--------------------------------------+

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  neutron lb-healthmonitor-create argument "timeout" required and present.
  Neutron complains anyway.
  
  Bug exists in:
- python-neutronclient==2.5.6
+ python-neutronclient==2.3.6
  
- Bug does not exist in: 
+ Bug does not exist in:
  python-neutronclient==2.3.5
  
  Log follows:
  (openstack)OSTML0204844:home$ neutron lb-healthmonitor-create --delay 6 
--max-retries 3 --timeout 5 --type TCP
  usage: neutron lb-healthmonitor-create [-h] [-f {shell,table,value}]
-[-c COLUMN] [--max-width ]
-[--variable VARIABLE] [--prefix PREFIX]
-[--request-format {json,xml}]
-[--tenant-id TENANT_ID]
-[--admin-state-down]
-[--expected-codes EXPECTED_CODES]
-[--http-method HTTP_METHOD]
-[--url-path URL_PATH] --delay DELAY
---max-retries MAX_RETRIES --timeout
-TIMEOUT --type {PING,TCP,HTTP,HTTPS}
+    [-c COLUMN] [--max-width ]
+    [--variable VARIABLE] [--prefix PREFIX]
+    [--request-format {json,xml}]
+    [--tenant-id TENANT_ID]
+    [--admin-state-down]
+    [--expected-codes EXPECTED_CODES]
+    [--http-method HTTP_METHOD]
+    [--url-path URL_PATH] --delay DELAY
+    --max-retries MAX_RE

[Yahoo-eng-team] [Bug 1309753] Re: VMware: datastore_regex not used while sending disk stats

2014-08-19 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.vmware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1309753

Title:
  VMware: datastore_regex not used while sending disk stats

Status in OpenStack Compute (Nova):
  In Progress
Status in Oslo VMware library for OpenStack projects:
  New

Bug description:
  
  VMware VCDriver uses datastore_regex to match datastores (disk abstraction) 
associated with a compute host which can be used for provisioning instances. 
But it does not use datastore_regex while reporting disk stats. As a result, 
when this option is enabled, the resource tracker may see different disk usage than 
what's computed while spawning the instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1309753/+subscriptions



[Yahoo-eng-team] [Bug 1358731] [NEW] SLAAC IP address is not checked for duplicates

2014-08-19 Thread Sergey Shnaidman
Public bug reported:

SLAAC IPv6 address should be checked for duplicates before applying.

Create network and IPv6 subnet.
$ neutron net-create net123
$ neutron subnet-create net123 --name=sub1 --ip-version 6 --ipv6-ra-mode slaac 
--ipv6-address-mode slaac 2014::/64

Then create a port with a fixed IP address that matches the SLAAC address for MAC 
"11:22:33:44:55:66":  2014::1322:33ff:fe44:5566
$ neutron port-create net123 --fixed-ip 
subnet_id=1d6fcc3d-0c55-4bdf-9e7f-5173df8d5fda,ip_address=2014::1322:33ff:fe44:5566

Now create a port with MAC "11:22:33:44:55:66", which should get the same address 
we set before:
$ neutron port-create net123 --mac-address 11:22:33:44:55:66

There is a reply to the client:
'unicode' object has no attribute 'get' (it's a separate bug about unclear 
error message in client)

And the traceback in neutron:

2014-08-18 10:12:49.755 ERROR neutron.api.v2.resource 
[req-ca20ca88-3dec-4445-9e9c-6eb73c343474 demo 
834b2e7732cb4ad4b3df81fe0b0ea906] create failed
2014-08-18 10:12:49.755 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
File "/opt/stack/neutron/neutron/api/v2/resource.py", line 87, in resource
  result = method(request=request, **args)
File "/opt/stack/neutron/neutron/api/v2/base.py", line 448, in create
  obj = obj_creator(request.context, **kwargs)
File "/opt/stack/neutron/neutron/plugins/ml2/plugin.py", line 805, in 
create_port
  result = super(Ml2Plugin, self).create_port(context, port)
File "/opt/stack/neutron/neutron/db/db_base_plugin_v2.py", line 1301, in 
create_port
  context.session.add(allocated)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 447, in 
__exit__
  self.rollback()
File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 
58, in __exit__
  compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 444, in 
__exit__
  self.commit()
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 354, in 
commit
  self._prepare_impl()
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 334, in 
_prepare_impl
  self.session.flush()
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1818, 
in flush
  self._flush(objects)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1936, 
in _flush
  transaction.rollback(_capture_exception=True)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/langhelpers.py", line 
58, in __exit__
  compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 1900, 
in _flush
  flush_context.execute()  
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 372, 
in execute
  rec.execute(self)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/unitofwork.py", line 525, 
in execute
  uow
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 64, 
in save_obj
  table, insert)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/orm/persistence.py", line 
541, in _emit_insert_statements
  execute(statement, multiparams)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 662, in 
execute
  params)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 761, in 
_execute_clauseelement
  compiled_sql, distilled_params
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 874, in 
_execute_context
  context)
File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/compat/handle_error.py",
 line 125, in _handle_dbapi_exception
  six.reraise(type(newraise), newraise, sys.exc_info()[2])
File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/compat/handle_error.py",
 line 102, in _handle_dbapi_exception
  per_fn = fn(ctx)
File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/exc_filters.py", 
line 323, in handler
  context.is_disconnect)
File 
"/usr/local/lib/python2.7/dist-packages/oslo/db/sqlalchemy/exc_filters.py", 
line 145, in _default_dupe_key_error
  raise exception.DBDuplicateEntry(columns, integrity_error, value)
 
 neutron.api.v2.resource DBDuplicateEntry: (IntegrityError) (1062, "Duplicate 
entry '2014::1322:33ff:fe44:5566-1d6fcc3d-0c55-4bdf-9e7f-5173df8d5fda-4' for 
key 'PRIMARY'") 'INSERT INTO ipallocations (port_id, ip_address, subnet_id, 
network_id) VALUES (%s, %s, %s, %s)' ('acc56c30-b685-4826-b169-cb0e3cdbc3cd', 
'2014::1322:33ff:fe44:5566', '1d6fcc3d-0c55-4bdf-9e7f-5173df8d5fda', 
'42d6d00e-697f-4d28-8c74-678e30034490')
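The collision is predictable because the SLAAC address is derived deterministically from the MAC by modified EUI-64; a small sketch (assuming a /64 prefix written with a trailing "::"):

```python
def slaac_address(mac, prefix):
    """Modified EUI-64: flip the universal/local bit of the first octet
    and splice ff:fe into the middle of the MAC."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = ["%x" % (iid[i] << 8 | iid[i + 1]) for i in range(0, 8, 2)]
    return prefix + ":".join(groups)

print(slaac_address("11:22:33:44:55:66", "2014::"))  # -> 2014::1322:33ff:fe44:5566
```

This reproduces exactly the fixed IP used in the first port-create command, so the second port is guaranteed to collide.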

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358731

Title:
  SLAAC IP address is not checked for duplicates

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  SLAAC IPv6 address should be checked for duplicates before applying.

[Yahoo-eng-team] [Bug 1358719] [NEW] Live migration fails as get_instance_disk_info is not present in the compute driver base class

2014-08-19 Thread Alessandro Pilotti
Public bug reported:

The "get_instance_disk_info" driver has been added to the libvirt
compute driver in the following commit:

https://github.com/openstack/nova/commit/e4974769743d5967626c1f0415113683411a03a4

This caused regression failures on drivers that do not implement it,
e.g.:

http://paste.openstack.org/show/97258/

The method has subsequently been added to the base class, but it raises a
NotImplementedError(), which still causes the regression:

https://github.com/openstack/nova/commit/2bed16c89356554a193a111d268a9587709ed2f7
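Until all drivers implement the method, callers can only degrade gracefully; a sketch of that pattern with stand-in classes (not nova's actual code):

```python
class ComputeDriver:  # stand-in base class
    def get_instance_disk_info(self, instance):
        raise NotImplementedError()

class MinimalDriver(ComputeDriver):  # e.g. a driver like Hyper-V before the fix
    pass

def disk_info_or_none(driver, instance):
    """Treat NotImplementedError as 'driver cannot report disk info'
    instead of failing the live migration outright."""
    try:
        return driver.get_instance_disk_info(instance)
    except NotImplementedError:
        return None

print(disk_info_or_none(MinimalDriver(), object()))  # prints None
```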

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: hyper-v

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358719

Title:
  Live migration fails as get_instance_disk_info is not present in the
  compute driver base class

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  The "get_instance_disk_info" driver has been added to the libvirt
  compute driver in the following commit:

  
https://github.com/openstack/nova/commit/e4974769743d5967626c1f0415113683411a03a4

  This caused regression failures on drivers that do not implement it,
  e.g.:

  http://paste.openstack.org/show/97258/

  The method has subsequently been added to the base class, but it raises a
  NotImplementedError(), which still causes the regression:

  
https://github.com/openstack/nova/commit/2bed16c89356554a193a111d268a9587709ed2f7

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358719/+subscriptions



[Yahoo-eng-team] [Bug 1358718] [NEW] duplicate ping packets from dhcp namespace when pinging across DVR subnet VMs

2014-08-19 Thread Sarada
Public bug reported:

1. Have a multi-node devstack setup with 1 controller, 1 network node (NN) & 2 compute nodes (CNs)
2. Create two networks & subnets within it.
net1 - 10.1.10.0/24
net2 10.1.8.0/24

3. Create a distributed router. Add two interfaces to the DVR.
4. Spawn VM1 in net1 & host it on CN1.
5. Spawn VM2 in net2 & host it on CN2.
6. Log in to the NN and, from the net1 dhcp namespace, try to ping VM2, which is part of net2.

As shown below we can see duplicate ping packets.

stack@NN:~/devstack$ sudo ip netns exec 
qdhcp-111de30b-cedf-492d-88b3-5a5fc2a92f4d ifconfig
loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:4 errors:0 dropped:0 overruns:0 frame:0
  TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:328 (328.0 B)  TX bytes:328 (328.0 B)

tap68b11c40-f9 Link encap:Ethernet  HWaddr fa:16:3e:87:67:20
  inet addr:10.1.10.3  Bcast:10.1.10.255  Mask:255.255.255.0
  inet6 addr: fe80::f816:3eff:fe87:6720/64 Scope:Link
  UP BROADCAST RUNNING  MTU:1500  Metric:1
  RX packets:179 errors:0 dropped:0 overruns:0 frame:0
  TX packets:104 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:16358 (16.3 KB)  TX bytes:10284 (10.2 KB)

stack@NN:~/devstack$ sudo ip netns exec 
qdhcp-111de30b-cedf-492d-88b3-5a5fc2a92f4d ping 10.1.8.2
PING 10.1.8.2 (10.1.8.2) 56(84) bytes of data.
64 bytes from 10.1.8.2: icmp_req=1 ttl=63 time=3.11 ms
64 bytes from 10.1.8.2: icmp_req=1 ttl=63 time=3.13 ms (DUP!)
64 bytes from 10.1.8.2: icmp_req=2 ttl=63 time=0.515 ms
64 bytes from 10.1.8.2: icmp_req=2 ttl=63 time=0.537 ms (DUP!)
64 bytes from 10.1.8.2: icmp_req=3 ttl=63 time=0.362 ms
64 bytes from 10.1.8.2: icmp_req=3 ttl=63 time=0.385 ms (DUP!)
64 bytes from 10.1.8.2: icmp_req=4 ttl=63 time=0.262 ms
64 bytes from 10.1.8.2: icmp_req=4 ttl=63 time=0.452 ms (DUP!)
^C
--- 10.1.8.2 ping statistics ---
4 packets transmitted, 4 received, +4 duplicates, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.262/1.094/3.132/1.174 ms
stack@qatst231:~/devstack$

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358718

Title:
  duplicate ping packets from dhcp namespace when pinging across DVR
  subnet  VMs

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  1. Have a multi-node devstack setup with 1 controller, 1 network node (NN) & 2 compute nodes (CNs)
  2. Create two networks & subnets within it.
  net1 - 10.1.10.0/24
  net2 10.1.8.0/24

  3. Create a distributed router. Add two interfaces to the DVR.
  4. Spawn VM1 in net1 & host it on CN1.
  5. Spawn VM2 in net2 & host it on CN2.
  6. Log in to the NN and, from the net1 dhcp namespace, try to ping VM2, which
  is part of net2.

  As shown below we can see duplicate ping packets.

  stack@NN:~/devstack$ sudo ip netns exec 
qdhcp-111de30b-cedf-492d-88b3-5a5fc2a92f4d ifconfig
  loLink encap:Local Loopback
inet addr:127.0.0.1  Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING  MTU:16436  Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:328 (328.0 B)  TX bytes:328 (328.0 B)

  tap68b11c40-f9 Link encap:Ethernet  HWaddr fa:16:3e:87:67:20
inet addr:10.1.10.3  Bcast:10.1.10.255  Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe87:6720/64 Scope:Link
UP BROADCAST RUNNING  MTU:1500  Metric:1
RX packets:179 errors:0 dropped:0 overruns:0 frame:0
TX packets:104 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:16358 (16.3 KB)  TX bytes:10284 (10.2 KB)

  stack@NN:~/devstack$ sudo ip netns exec 
qdhcp-111de30b-cedf-492d-88b3-5a5fc2a92f4d ping 10.1.8.2
  PING 10.1.8.2 (10.1.8.2) 56(84) bytes of data.
  64 bytes from 10.1.8.2: icmp_req=1 ttl=63 time=3.11 ms
  64 bytes from 10.1.8.2: icmp_req=1 ttl=63 time=3.13 ms (DUP!)
  64 bytes from 10.1.8.2: icmp_req=2 ttl=63 time=0.515 ms
  64 bytes from 10.1.8.2: icmp_req=2 ttl=63 time=0.537 ms (DUP!)
  64 bytes from 10.1.8.2: icmp_req=3 ttl=63 time=0.362 ms
  64 bytes from 10.1.8.2: icmp_req=3 ttl=63 time=0.385 ms (DUP!)
  64 bytes from 10.1.8.2: icmp_req=4 ttl=63 time=0.262 ms
  64 bytes from 10.1.8.2: icmp_req=4 ttl=63 time=0.452 ms (DUP!)
  ^C
  --- 10.1.8.2 ping statistics ---
  4 packets transmitted, 4 received, +4 duplicates, 0% packet loss, time 2999ms
  rtt min/avg/max/mdev = 0.262/1.094/3.132/1.174 ms
  stack@qatst231:~/devstack$
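The symptom above can be confirmed mechanically. The following is a small illustrative helper (not part of Neutron) that parses ping output and counts duplicate echo replies by ICMP sequence number:

```python
# Illustrative helper: count duplicate ICMP echo replies in ping output
# by tallying repeated icmp_req sequence numbers.
import re
from collections import Counter

def count_dups(ping_output: str) -> int:
    seqs = re.findall(r'icmp_req=(\d+)', ping_output)
    counts = Counter(seqs)
    # every occurrence beyond the first for a sequence number is a DUP
    return sum(n - 1 for n in counts.values())

sample = """64 bytes from 10.1.8.2: icmp_req=1 ttl=63 time=3.11 ms
64 bytes from 10.1.8.2: icmp_req=1 ttl=63 time=3.13 ms (DUP!)
64 bytes from 10.1.8.2: icmp_req=2 ttl=63 time=0.515 ms"""
print(count_dups(sample))  # 1
```

A healthy path should report zero duplicates; the output quoted above shows one duplicate per sequence number.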

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358718/+subscriptions

[Yahoo-eng-team] [Bug 1330065] Re: VMWare - Driver does not ignore Datastore in maintenance mode

2014-08-19 Thread Davanum Srinivas (DIMS)
** Also affects: oslo.vmware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1330065

Title:
  VMWare - Driver does not ignore Datastore in maintenance mode

Status in OpenStack Compute (Nova):
  In Progress
Status in The OpenStack VMwareAPI subTeam:
  New
Status in Oslo VMware library for OpenStack projects:
  New

Bug description:
  A datastore can be in maintenance mode. The driver does not ignore it
  either in stats updates or while spawning instances.

  During stats updates, wrong stats are returned if a datastore is in
  maintenance mode.

  Also during spawning, if a datastore in maintenance mode gets chosen
  because it has the largest disk space, the spawn will fail.

  The driver should ignore datastores in maintenance mode.
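A minimal sketch of the suggested behavior (illustrative names, not the actual oslo.vmware or Nova driver API): filter out datastores flagged as being in maintenance mode before picking the one with the most free space.

```python
# Sketch: skip datastores in maintenance mode when selecting a target.
# Datastore and its fields are hypothetical stand-ins for the driver's
# real datastore objects.
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    free_space: int
    maintenance_mode: str  # 'normal' or 'inMaintenance'

def pick_datastore(datastores):
    usable = [ds for ds in datastores if ds.maintenance_mode == 'normal']
    if not usable:
        raise RuntimeError("no usable datastore")
    # choose the usable datastore with the largest free space
    return max(usable, key=lambda ds: ds.free_space)

dss = [Datastore('ds1', 500, 'inMaintenance'), Datastore('ds2', 100, 'normal')]
print(pick_datastore(dss).name)  # ds2, even though ds1 has more space
```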

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1330065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358709] [NEW] SLAAC IPv6 addressing doesn't work with more than one subnet

2014-08-19 Thread Sergey Shnaidman
Public bug reported:

When a network has more than one IPv6 SLAAC (or dhcp-stateless) subnet,
a port receives a SLAAC address only from the first one; the address
from the second subnet comes from the fixed IP allocation range.

Scenario:
1) create a network and two SLAAC subnets:
~$ neutron net-create net12
~$ neutron subnet-create net12 --ipv6-ra-mode=slaac --ipv6-address-mode=slaac 
--ip-version=6 2003::/64
| allocation_pools  | {"start": "2003::2", "end": "2003::ffff:ffff:ffff:fffe"} |
| cidr  | 2003::/64|
| dns_nameservers   |  |
| enable_dhcp   | True |
| gateway_ip| 2003::1  |
| host_routes   |  |
| id| 220b7e4e-b30a-4d5c-847d-58df72bf7e8d |
| ip_version| 6|
| ipv6_address_mode | slaac|
| ipv6_ra_mode  | slaac|
| name  |  |
| network_id| 4cfe1699-a10d-4706-bedb-5680cb5cf27f |
| tenant_id | 834b2e7732cb4ad4b3df81fe0b0ea906 |

~$ neutron subnet-create --name=additional net12 --ipv6-ra-mode=slaac 
--ipv6-address-mode=slaac --ip-version=6 2004::/64
| allocation_pools  | {"start": "2004::2", "end": "2004::ffff:ffff:ffff:fffe"} |
| cidr  | 2004::/64|
| dns_nameservers   |  |
| enable_dhcp   | True |
| gateway_ip| 2004::1  |
| host_routes   |  |
| id| e48e5d96-565f-45b1-8efc-4634d3ed8bf8 |
| ip_version| 6|
| ipv6_address_mode | slaac|
| ipv6_ra_mode  | slaac|
| name  | additional   |
| network_id| 4cfe1699-a10d-4706-bedb-5680cb5cf27f |
| tenant_id | 834b2e7732cb4ad4b3df81fe0b0ea906 |

Now let's create a port in this network:

~$ neutron port-create net12
Created a new port:
+---+--+
| Field | Value 
   |
+---+--+
| admin_state_up| True  
   |
| allowed_address_pairs |   
   |
| binding:vnic_type | normal
   |
| device_id |   
   |
| device_owner  |   
   |
| fixed_ips | {"subnet_id": "220b7e4e-b30a-4d5c-847d-58df72bf7e8d", 
"ip_address": "2003::f816:3eff:fe55:6297"} |
|   | {"subnet_id": "e48e5d96-565f-45b1-8efc-4634d3ed8bf8", 
"ip_address": "2004::2"}   |
| id| 12c29fd4-1c68-4aea-88c6-b89d73ebac2c  
   |
| mac_address   | fa:16:3e:55:62:97 
   |
| name  |   
   |
| network_id| 4cfe1699-a10d-4706-bedb-5680cb5cf27f  
   |
| security_groups   | 65e77cc0-879c-4ed0-b647-d27d36844e0b  
   |
| status| DOWN  
   |
| tenant_id | 834b2e7732cb4ad4b3df81fe0b0ea906  
   |

As we can see, the port gets a SLAAC IP from the first subnet but a fixed IP from the second one.
The expected behavior is to get SLAAC IPs from both subnets.
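The expected address for the second subnet can be computed by hand: SLAAC derives the interface identifier from the port's MAC via EUI-64 (flip the universal/local bit, insert ff:fe in the middle). A small sketch, independent of Neutron's implementation:

```python
# Compute the EUI-64 SLAAC address a port's MAC should yield on a /64
# prefix: flip the U/L bit of the first octet and insert ff:fe between
# the OUI and the device half of the MAC.
import ipaddress

def slaac_address(prefix: str, mac: str) -> str:
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    iface_id = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.IPv6Network(prefix)
    return str(net[iface_id])

mac = "fa:16:3e:55:62:97"
print(slaac_address("2003::/64", mac))  # 2003::f816:3eff:fe55:6297
print(slaac_address("2004::/64", mac))  # 2004::f816:3eff:fe55:6297
```

The first result matches the address actually allocated on the first subnet; the second is what the port should have received on the second subnet instead of 2004::2.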

** Affects: neutron
 Importance: Undecided

[Yahoo-eng-team] [Bug 1358702] [NEW] Hyper-V unit test fails on Windows due to path separator inconsistency: nova.tests.virt.hyperv.test_pathutils.PathUtilsTestCase.test_lookup_config_drive_path

2014-08-19 Thread Alessandro Pilotti
Public bug reported:

The following test fails due to a mismatch in the path separator.

FAIL: 
nova.tests.virt.hyperv.test_pathutils.PathUtilsTestCase.test_lookup_config_drive_path
--
_StringException: Empty attachments:
  pythonlogging:''

Traceback (most recent call last):
  File "C:\OpenStack\nova\nova\tests\virt\hyperv\test_pathutils.py", line 48, in test_lookup_configdrive_path
    format_ext)
  File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\testtools\testcase.py", line 321, in assertEqual
    self.assertThat(observed, matcher, message)
  File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\testtools\testcase.py", line 406, in assertThat
    raise mismatch_error
MismatchError: !=:
reference = 'C:/fake_instance_dir\\configdrive.vhd'
actual    = 'C:/fake_instance_dir/configdrive.vhd'
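The mismatch comes from joining a forward-slash base path with Windows path functions, which insert a backslash. One possible way to make such an assertion platform-neutral (an assumed fix, not the actual Nova patch) is to normalize the separators before comparing:

```python
# Reproduce the mismatch with ntpath (Windows path semantics regardless
# of host OS), then compare after normalizing separators.
import ntpath

expected = ntpath.join('C:/fake_instance_dir', 'configdrive.vhd')
actual = 'C:/fake_instance_dir/configdrive.vhd'

def norm(p: str) -> str:
    # collapse both separator styles to '/' for comparison only
    return p.replace('\\', '/')

print(expected)                    # C:/fake_instance_dir\configdrive.vhd
print(norm(expected) == norm(actual))  # True
```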

** Affects: nova
 Importance: Low
 Status: Triaged


** Tags: hyper-v

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358702

Title:
  Hyper-V unit test fails on Windows due to path separator
  inconsistency:
  
nova.tests.virt.hyperv.test_pathutils.PathUtilsTestCase.test_lookup_config_drive_path

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  The following test fails due to a mismatch in the path separator.

  FAIL: 
nova.tests.virt.hyperv.test_pathutils.PathUtilsTestCase.test_lookup_config_drive_path
  --
  _StringException: Empty attachments:
pythonlogging:''

  Traceback (most recent call last):
    File "C:\OpenStack\nova\nova\tests\virt\hyperv\test_pathutils.py", line 48, in test_lookup_configdrive_path
      format_ext)
    File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\testtools\testcase.py", line 321, in assertEqual
      self.assertThat(observed, matcher, message)
    File "C:\Program Files (x86)\Cloudbase Solutions\OpenStack\Nova\Python27\lib\site-packages\testtools\testcase.py", line 406, in assertThat
      raise mismatch_error
  MismatchError: !=:
  reference = 'C:/fake_instance_dir\\configdrive.vhd'
  actual    = 'C:/fake_instance_dir/configdrive.vhd'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358702/+subscriptions



[Yahoo-eng-team] [Bug 1274758] Re: Error during ComputeManager.update_available_resource: 'NoneType' object has no attribute '__getitem__'

2014-08-19 Thread Alan Pevec
** Changed in: nova
   Status: Fix Committed => Fix Released

** Changed in: nova/havana
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1274758

Title:
  Error during ComputeManager.update_available_resource: 'NoneType'
  object has no attribute '__getitem__'

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  In Progress

Bug description:
  Error during ComputeManager.update_available_resource: 'NoneType'
  object has no attribute '__getitem__'

  
  ERROR nova.openstack.common.periodic_task [-] Error during 
ComputeManager.update_available_resource: 'NoneType' object has no attribute 
'__getitem__'
  TRACE nova.openstack.common.periodic_task Traceback (most recent call last):
  TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/new/nova/nova/openstack/common/periodic_task.py", line 182, in 
run_periodic_tasks
  TRACE nova.openstack.common.periodic_task task(self, context)
  TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 5049, in 
update_available_resource
  TRACE nova.openstack.common.periodic_task 
rt.update_available_resource(context)
  TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/new/nova/nova/openstack/common/lockutils.py", line 249, in inner
  TRACE nova.openstack.common.periodic_task return f(*args, **kwargs)   
  TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/new/nova/nova/compute/resource_tracker.py", line 300, in 
update_available_resource
  TRACE nova.openstack.common.periodic_task resources = 
self.driver.get_available_resource(self.nodename)
  TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3943, in 
get_available_resource
  TRACE nova.openstack.common.periodic_task stats = 
self.host_state.get_host_stats(refresh=True)
  TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5016, in get_host_stats
  TRACE nova.openstack.common.periodic_task self.update_status()
  TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5052, in update_status
  TRACE nova.openstack.common.periodic_task data["vcpus_used"] = 
self.driver.get_vcpu_used()
  TRACE nova.openstack.common.periodic_task   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3626, in get_vcpu_used
  TRACE nova.openstack.common.periodic_task total += len(vcpus[1])
  TRACE nova.openstack.common.periodic_task TypeError: 'NoneType' object has no 
attribute '__getitem__'
  TRACE nova.openstack.common.periodic_task 

  
  
http://logs.openstack.org/51/63551/6/gate/gate-tempest-dsvm-postgres-full/4860441/logs/screen-n-cpu.txt.gz?level=TRACE#_2014-01-30_22_38_29_401

  Seen in the gate

  logstash query: message:"TypeError: 'NoneType' object has no attribute
  '__getitem__'" AND filename:"logs/screen-n-cpu.txt"
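The TypeError occurs because libvirt's vcpus() call can return None for a domain in transition, and the driver indexes the result unconditionally. A hedged sketch of the failure and a defensive guard (an assumed fix, not the actual Nova patch):

```python
# Sketch: dom.vcpus() may return None while a domain shuts down; skip
# such domains instead of indexing None. FakeDomain is a hypothetical
# stand-in for a libvirt domain object.
def count_vcpus_used(domains):
    total = 0
    for dom in domains:
        vcpus = dom.vcpus()        # may be None for a transitioning domain
        if vcpus is None:
            continue
        total += len(vcpus[1])     # second element: per-vCPU placement list
    return total

class FakeDomain:
    def __init__(self, vcpus):
        self._vcpus = vcpus
    def vcpus(self):
        return self._vcpus

doms = [FakeDomain(([('a',)], [(True,), (True,)])), FakeDomain(None)]
print(count_vcpus_used(doms))  # 2; without the guard, TypeError on the None
```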

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1274758/+subscriptions



[Yahoo-eng-team] [Bug 1358668] [NEW] Big Switch: keyerror on filtered get_ports call

2014-08-19 Thread Kevin Benton
Public bug reported:

If get_ports is called in the Big Switch plugin without 'id' being one
of the included fields, _extend_port_dict_binding will fail with the
following error.

Traceback (most recent call last):
  File "neutron/tests/unit/bigswitch/test_restproxy_plugin.py", line 87, in 
test_get_ports_no_id
context.get_admin_context(), fields=['name'])
  File "neutron/plugins/bigswitch/plugin.py", line 715, in get_ports
self._extend_port_dict_binding(context, port)
  File "neutron/plugins/bigswitch/plugin.py", line 361, in 
_extend_port_dict_binding
hostid = porttracker_db.get_port_hostid(context, port['id'])
KeyError: 'id'
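The root cause is that a fields filter like `fields=['name']` strips 'id' from the returned port dicts before `_extend_port_dict_binding` needs it. One possible shape of a fix (illustrative helper names, not the actual plugin code) is to always fetch 'id' internally and trim it afterwards:

```python
# Sketch: force 'id' into the queried fields so the binding extension
# can look up the host, then drop it if the caller did not request it.
# fetch/extend are hypothetical stand-ins for the plugin's db query and
# _extend_port_dict_binding.
def get_ports(fetch, extend, context, fields=None):
    query_fields = fields
    if fields is not None and 'id' not in fields:
        query_fields = list(fields) + ['id']
    ports = fetch(context, query_fields)
    for port in ports:
        extend(context, port)            # needs port['id']
    if query_fields is not fields:
        for port in ports:
            port.pop('id', None)         # caller never asked for it
    return ports

fetch = lambda ctx, f: [{'name': 'p1', 'id': 'abc'}]
extend = lambda ctx, p: p.update(hostid='host-' + p['id'])
print(get_ports(fetch, extend, None, fields=['name']))
```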

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358668

Title:
  Big Switch: keyerror on filtered get_ports call

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  If get_ports is called in the Big Switch plugin without 'id' being one
  of the included fields, _extend_port_dict_binding will fail with the
  following error.

  Traceback (most recent call last):
File "neutron/tests/unit/bigswitch/test_restproxy_plugin.py", line 87, in 
test_get_ports_no_id
  context.get_admin_context(), fields=['name'])
File "neutron/plugins/bigswitch/plugin.py", line 715, in get_ports
  self._extend_port_dict_binding(context, port)
File "neutron/plugins/bigswitch/plugin.py", line 361, in 
_extend_port_dict_binding
  hostid = porttracker_db.get_port_hostid(context, port['id'])
  KeyError: 'id'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358668/+subscriptions



[Yahoo-eng-team] [Bug 1358667] [NEW] Don't need judge "suffix" in _create_image method

2014-08-19 Thread ugvddm
Public bug reported:

I don't think we need to check "suffix" in the _create_image method:
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2715
because it just wastes time. Is that a bug?

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358667

Title:
  Don't need judge "suffix" in _create_image method

Status in OpenStack Compute (Nova):
  New

Bug description:
  I don't think we need to check "suffix" in the _create_image method:
  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L2715
  because it just wastes time. Is that a bug?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1358667/+subscriptions



[Yahoo-eng-team] [Bug 1328375] Re: The 'x-openstack-request-id' from cinder cannot be output to the log.

2014-08-19 Thread Takashi NATSUME
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1328375

Title:
  The 'x-openstack-request-id' from cinder cannot be output to the log.

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Cinder returns a response including 'x-openstack-request-id' in the HTTP 
response header when nova calls cinder.
  But nova cannot output 'x-openstack-request-id' to the log (if the call is 
successful).
  If nova outputs 'x-openstack-request-id' to the log, it will enable us to 
perform the analysis more efficiently.

  Before:
  

  2014-06-10 10:34:13.636 DEBUG nova.volume.cinder 
[req-6ff36d30-8a39-499a-b40c-ea9ca8dafc25 admin admin] Cinderclient connection 
created using URL: http://10.0.2.15:8776/v1/5b25b7114cd34d41a9415bbc47a07c81 
cinderclient /opt/stack/nova/nova/volume/cinder.py:94
  2014-06-10 10:34:13.640 INFO urllib3.connectionpool 
[req-6ff36d30-8a39-499a-b40c-ea9ca8dafc25 admin admin] Starting new HTTP 
connection (1): 10.0.2.15
  2014-06-10 10:34:13.641 DEBUG urllib3.connectionpool 
[req-6ff36d30-8a39-499a-b40c-ea9ca8dafc25 admin admin] Setting read timeout to 
None _make_request 
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:375
  2014-06-10 10:34:16.381 DEBUG urllib3.connectionpool 
[req-6ff36d30-8a39-499a-b40c-ea9ca8dafc25 admin admin] "POST 
/v1/5b25b7114cd34d41a9415bbc47a07c81/volumes/e4fe2d26-fccb-475e-9992-c8e25a418118/action
 HTTP/1.1" 200 447 _make_request 
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:415
  


  After:
  

  2014-06-10 13:40:19.423 DEBUG nova.volume.cinder 
[req-bfd42ba2-da4a-4687-8a60-d2a5eca7b88a admin admin] Cinderclient connection 
created using URL: http://10.0.2.15:8776/v1/d35af2c7a90581879aecbc448203 
cinderclient /opt/stack/nova/nova/volume/cinder.py:97
  (snipped...)
  2014-06-10 13:40:19.424 DEBUG cinderclient.client 
[req-bfd42ba2-da4a-4687-8a60-d2a5eca7b88a admin admin] 
  REQ: curl -i 
http://10.0.2.15:8776/v1/d35af2c7a90581879aecbc448203/volumes/7a7d47c7-b31d-41bb-874f-f37dd175a4a4/action
 -X POST -H "X-Auth-Project-Id: d35af2c7a90581879aecbc448203" -H 
"User-Agent: python-cinderclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: (snipped...)" -d '{"os-attach": 
{"instance_uuid": "cad01ef1-2728-4a9a-b4d6-da1a783a627b", "mountpoint": 
"/dev/vdb", "mode": "rw"}}'
   http_log_req /opt/stack/python-cinderclient/cinderclient/client.py:130
  2014-06-10 13:40:19.427 INFO urllib3.connectionpool 
[req-bfd42ba2-da4a-4687-8a60-d2a5eca7b88a admin admin] Starting new HTTP 
connection (1): 10.0.2.15
  2014-06-10 13:40:19.428 DEBUG urllib3.connectionpool 
[req-bfd42ba2-da4a-4687-8a60-d2a5eca7b88a admin admin] Setting read timeout to 
None _make_request 
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:375
  2014-06-10 13:40:19.909 DEBUG urllib3.connectionpool 
[req-bfd42ba2-da4a-4687-8a60-d2a5eca7b88a admin admin] "POST 
/v1/d35af2c7a90581879aecbc448203/volumes/7a7d47c7-b31d-41bb-874f-f37dd175a4a4/action
 HTTP/1.1" 202 0 _make_request 
/usr/lib/python2.7/dist-packages/urllib3/connectionpool.py:415
  (snipped...)
  2014-06-10 13:40:19.910 DEBUG cinderclient.client 
[req-bfd42ba2-da4a-4687-8a60-d2a5eca7b88a admin admin] RESP: [202] 
CaseInsensitiveDict({'date': 'Tue, 10 Jun 2014 04:40:19 GMT', 'content-length': 
'0', 'content-type': 'text/html; charset=UTF-8', 'x-openstack-request-id': 
'req-b0e7bccf-cc70-4646-93c0-bd94090cc5f0'})
  RESP BODY: 
   http_log_resp /opt/stack/python-cinderclient/cinderclient/client.py:139
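Extracting the request id from a response's headers for logging is straightforward; a minimal sketch (illustrative helper, not the actual cinderclient change):

```python
# Pull 'x-openstack-request-id' out of a response header dict and emit
# it in a debug log line, returning it for correlation.
import logging

def log_request_id(logger, resp_headers):
    req_id = resp_headers.get('x-openstack-request-id')
    if req_id:
        logger.debug("cinder request id: %s", req_id)
    return req_id

headers = {'x-openstack-request-id': 'req-b0e7bccf-cc70-4646-93c0-bd94090cc5f0'}
print(log_request_id(logging.getLogger(__name__), headers))
```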
  


To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1328375/+subscriptions



[Yahoo-eng-team] [Bug 1358636] [NEW] Test tempest.api.network.admin.test_l3_agent_scheduler.L3AgentSchedulerTestXML.test_add_list_remove_router_on_l3_agent fails

2014-08-19 Thread Sergey Kraynev
Public bug reported:

Test
tempest.api.network.admin.test_l3_agent_scheduler.L3AgentSchedulerTestXML.test_add_list_remove_router_on_l3_agent
fails for job gate-tempest-dsvm-neutron-full with traceback:

ft335.1: 
tempest.api.network.admin.test_l3_agent_scheduler.L3AgentSchedulerTestXML.test_add_list_remove_router_on_l3_agent[gate,smoke]_StringException:
 Empty attachments:
  stderr
  stdout

pythonlogging:'': {{{
2014-08-18 15:31:39,865 535 INFO [tempest.common.rest_client] Request 
(L3AgentSchedulerTestXML:test_add_list_remove_router_on_l3_agent): 200 POST 
http://127.0.0.1:5000/v2.0/tokens
2014-08-18 15:31:40,148 535 INFO [tempest.common.rest_client] Request 
(L3AgentSchedulerTestXML:test_add_list_remove_router_on_l3_agent): 201 POST 
http://127.0.0.1:9696/v2.0/routers 0.282s
2014-08-18 15:31:40,259 535 INFO [tempest.common.rest_client] Request 
(L3AgentSchedulerTestXML:test_add_list_remove_router_on_l3_agent): 409 POST 
http://127.0.0.1:9696/v2.0/agents/47dd83c6-f92d-40d9-8601-5a38b6b9eda0/l3-routers
 0.109s
2014-08-18 15:31:40,714 535 INFO [tempest.common.rest_client] Request 
(L3AgentSchedulerTestXML:_run_cleanups): 204 DELETE 
http://127.0.0.1:9696/v2.0/routers/fd7d082c-71db-4fa0-bd0c-0b31acef9b1a 0.450s
}}}

Traceback (most recent call last):
  File "tempest/api/network/admin/test_l3_agent_scheduler.py", line 66, in 
test_add_list_remove_router_on_l3_agent
self.agent['id'], router['router']['id'])
  File "tempest/services/network/xml/network_client.py", line 218, in 
add_router_to_l3_agent
resp, body = self.post(uri, str(common.Document(router)))
  File "tempest/services/network/network_client_base.py", line 73, in post
return self.rest_client.post(uri, body, headers)
  File "tempest/common/rest_client.py", line 219, in post
return self.request('POST', url, extra_headers, headers, body)
  File "tempest/common/rest_client.py", line 431, in request
resp, resp_body)
  File "tempest/common/rest_client.py", line 485, in _error_checker
raise exceptions.Conflict(resp_body)
Conflict: An object with that identifier already exists
Details: {'message': 'The router fd7d082c-71db-4fa0-bd0c-0b31acef9b1a has been 
already hosted by the L3 Agent 47dd83c6-f92d-40d9-8601-5a38b6b9eda0.', 'type': 
'RouterHostedByL3Agent', 'detail': {}}

There is log of test results:
http://logs.openstack.org/43/97543/10/gate/gate-tempest-dsvm-neutron-
full/8f09c74/logs/testr_results.html.gz
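The 409 suggests the L3 agent auto-scheduled the router before the test's explicit add. One way the test could tolerate that race (an assumed approach, not the actual Tempest fix) is to treat the Conflict as benign when the router is already hosted on the target agent:

```python
# Sketch: swallow a Conflict from add-router-to-l3-agent only if the
# router really is hosted on that agent. Conflict and FakeClient are
# hypothetical stand-ins for Tempest's exception and network client.
class Conflict(Exception):
    pass

def add_router_tolerant(client, agent_id, router_id):
    try:
        client.add_router_to_l3_agent(agent_id, router_id)
    except Conflict:
        hosted = client.list_routers_on_l3_agent(agent_id)
        if router_id not in (r['id'] for r in hosted):
            raise  # a genuine conflict, not auto-scheduling

class FakeClient:
    def add_router_to_l3_agent(self, agent_id, router_id):
        raise Conflict("already hosted")
    def list_routers_on_l3_agent(self, agent_id):
        return [{'id': 'fd7d082c-71db-4fa0-bd0c-0b31acef9b1a'}]

add_router_tolerant(FakeClient(), '47dd83c6', 'fd7d082c-71db-4fa0-bd0c-0b31acef9b1a')
print("ok")  # no exception: the conflict was the auto-scheduled router
```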

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358636

Title:
  Test
  
tempest.api.network.admin.test_l3_agent_scheduler.L3AgentSchedulerTestXML.test_add_list_remove_router_on_l3_agent
  fails

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Test
  
tempest.api.network.admin.test_l3_agent_scheduler.L3AgentSchedulerTestXML.test_add_list_remove_router_on_l3_agent
  fails for job gate-tempest-dsvm-neutron-full with traceback:

  ft335.1: 
tempest.api.network.admin.test_l3_agent_scheduler.L3AgentSchedulerTestXML.test_add_list_remove_router_on_l3_agent[gate,smoke]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2014-08-18 15:31:39,865 535 INFO [tempest.common.rest_client] Request 
(L3AgentSchedulerTestXML:test_add_list_remove_router_on_l3_agent): 200 POST 
http://127.0.0.1:5000/v2.0/tokens
  2014-08-18 15:31:40,148 535 INFO [tempest.common.rest_client] Request 
(L3AgentSchedulerTestXML:test_add_list_remove_router_on_l3_agent): 201 POST 
http://127.0.0.1:9696/v2.0/routers 0.282s
  2014-08-18 15:31:40,259 535 INFO [tempest.common.rest_client] Request 
(L3AgentSchedulerTestXML:test_add_list_remove_router_on_l3_agent): 409 POST 
http://127.0.0.1:9696/v2.0/agents/47dd83c6-f92d-40d9-8601-5a38b6b9eda0/l3-routers
 0.109s
  2014-08-18 15:31:40,714 535 INFO [tempest.common.rest_client] Request 
(L3AgentSchedulerTestXML:_run_cleanups): 204 DELETE 
http://127.0.0.1:9696/v2.0/routers/fd7d082c-71db-4fa0-bd0c-0b31acef9b1a 0.450s
  }}}

  Traceback (most recent call last):
File "tempest/api/network/admin/test_l3_agent_scheduler.py", line 66, in 
test_add_list_remove_router_on_l3_agent
  self.agent['id'], router['router']['id'])
File "tempest/services/network/xml/network_client.py", line 218, in 
add_router_to_l3_agent
  resp, body = self.post(uri, str(common.Document(router)))
File "tempest/services/network/network_client_base.py", line 73, in post
  return self.rest_client.post(uri, body, headers)
File "tempest/common/rest_client.py", line 219, in post
  return self.request('POST', url, extra_headers, headers, body)
File "tempest/common/rest_client.py", line 431, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 485, in _error_checker
  raise exceptions.Conflict(resp_body)
  Conflict: An object wi