[Yahoo-eng-team] [Bug 1367596] [NEW] admin_state_up=False on l3-agent doesn't affect active routers

2014-09-09 Thread Yair Fried
Public bug reported:

When a cloud admin shuts down an l3-agent via the API (without evacuating routers
first), it stands to reason that they want traffic to stop routing through this
agent (either for maintenance, or perhaps because a security breach was found).
Currently, even when the agent is down, traffic keeps being routed, unaffected by the change.

The agent should set all interfaces inside the router namespaces to DOWN so that
no more traffic is routed.

Alternatively, the agent should set the routers' admin state to DOWN, assuming
this actually affects the routers.

Either way, the end result should be that traffic is not routed via the agent
once the admin brings it down.
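
A minimal sketch of the first option, assuming the agent knows the
qrouter-<router_id> namespace names (this shells out to iproute2 purely for
illustration; the real agent would go through neutron.agent.linux.ip_lib):

import subprocess


def set_namespace_links_down(namespace):
    """Set every non-loopback interface inside a router namespace to DOWN."""
    output = subprocess.check_output(
        ['ip', 'netns', 'exec', namespace, 'ip', '-o', 'link', 'show'])
    for line in output.decode().splitlines():
        # Lines look like: "2: qr-0a1b2c3d-4e@if12: <BROADCAST,...> mtu 1500 ..."
        device = line.split(':', 2)[1].strip().split('@')[0]
        if device == 'lo':
            continue
        subprocess.check_call(
            ['ip', 'netns', 'exec', namespace,
             'ip', 'link', 'set', device, 'down'])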

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  When cloud admin is shutting down l3-agent via API (without evacuating 
routers first) it stands to reason he would like traffic to stop routing via 
this agent (either for maintenance or maybe because a security breach was 
found...).
  Currently, even when an agent is down, traffic keeps routing without any 
effect.
  
  Agent should set all interfaces inside router namespace  to DOWN so no
  more traffic is being routed.
  
  Alternatively, agent should set routers admin state to DOWN, assuming
- this actually effects the router.
+ this actually affects the router.
  
  Either way, end result should be - traffic is not routed via agent when
  admin brings it down.

** Description changed:

  When cloud admin is shutting down l3-agent via API (without evacuating 
routers first) it stands to reason he would like traffic to stop routing via 
this agent (either for maintenance or maybe because a security breach was 
found...).
- Currently, even when an agent is down, traffic keeps routing without any 
effect.
+ Currently, even when an agent is down, traffic keeps routing without any 
affect.
  
  Agent should set all interfaces inside router namespace  to DOWN so no
  more traffic is being routed.
  
  Alternatively, agent should set routers admin state to DOWN, assuming
  this actually affects the router.
  
  Either way, end result should be - traffic is not routed via agent when
  admin brings it down.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367596

Title:
  admin_state_up=False on l3-agent doesn't affect active routers

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a cloud admin shuts down an l3-agent via the API (without evacuating
  routers first), it stands to reason that they want traffic to stop routing
  through this agent (either for maintenance, or perhaps because a security
  breach was found).
  Currently, even when the agent is down, traffic keeps being routed,
  unaffected by the change.

  The agent should set all interfaces inside the router namespaces to DOWN so
  that no more traffic is routed.

  Alternatively, the agent should set the routers' admin state to DOWN,
  assuming this actually affects the routers.

  Either way, the end result should be that traffic is not routed via the
  agent once the admin brings it down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367590] [NEW] File exists: '/opt/stack/new/horizon/static/scss/assets'

2014-09-09 Thread Joshua Harlow
Public bug reported:

Seems like some kind of asset problem is occurring that is breaking the
integrated gate.

File exists: '/opt/stack/new/horizon/static/scss/assets'

http://logs.openstack.org/81/120281/3/check/check-tempest-dsvm-neutron-
full/fbe5341/logs/screen-horizon.txt.gz

This causes:

tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps to fail:

http://logs.openstack.org/81/120281/3/check/check-tempest-dsvm-neutron-
full/fbe5341/logs/testr_results.html.gz

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367590

Title:
  File exists: '/opt/stack/new/horizon/static/scss/assets'

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Seems like some kind of asset problem is occurring that is breaking
  the integrated gate.

  File exists: '/opt/stack/new/horizon/static/scss/assets'

  http://logs.openstack.org/81/120281/3/check/check-tempest-dsvm-
  neutron-full/fbe5341/logs/screen-horizon.txt.gz

  This causes:

  tempest.scenario.test_dashboard_basic_ops.TestDashboardBasicOps to fail:

  http://logs.openstack.org/81/120281/3/check/check-tempest-dsvm-
  neutron-full/fbe5341/logs/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367590/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367588] [NEW] When a VM with FloatingIP is directly deleted without disassociating a FIP, the fip agent gateway port is not deleted.

2014-09-09 Thread Swaminathan Vasudevan
Public bug reported:

When a VM with FloatingIP is deleted without disassociating a FIP, the
internal FIP agent gateway port on that particular compute node is not
deleted.

1. Create a dvr router.
2. Attach a subnet to the router
3. Attach a Gateway to the router
4. Create a Floating IP
5. Create a VM on the above subnet
6. Associate the Floating IP to the VM's private IP.
7. Now do a port-list; you will see a port with device_owner
"router:floatingip_agent_gw".
8. Delete the VM (nova delete VM-name).
9. The port still remains.
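
A hedged sketch of the cleanup this report asks for. The
get_floatingip_count_on_host helper is an assumption, and the device_owner
value simply follows step 7 above; treat this as an illustration, not the
actual DVR fix:

def cleanup_fip_agent_gw_port(plugin, context, host, ext_net_id):
    """Delete the host's FIP agent gateway port once its last FIP is gone."""
    # Assumed helper: number of floating IPs still hosted on this compute node.
    if plugin.get_floatingip_count_on_host(context, host):
        return
    ports = plugin.get_ports(context, filters={
        'device_owner': ['router:floatingip_agent_gw'],
        'network_id': [ext_net_id]})
    for port in ports:
        if port.get('binding:host_id') == host:
            plugin.delete_port(context, port['id'])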

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367588

Title:
  When a VM with FloatingIP is directly deleted without disassociating a
  FIP, the fip agent gateway port is not deleted.

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a VM with FloatingIP is deleted without disassociating a FIP, the
  internal FIP agent gateway port on that particular compute node is not
  deleted.

  1. Create a dvr router.
  2. Attach a subnet to the router
  3. Attach a Gateway to the router
  4. Create a Floating IP
  5. Create a VM on the above subnet
  6. Associate the Floating IP to the VM's private IP.
  7. Now do a port-list; you will see a port with device_owner
  "router:floatingip_agent_gw".
  8. Delete the VM (nova delete VM-name).
  9. The port still remains.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367588/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367575] [NEW] Some server actions does not work for v2.1 API

2014-09-09 Thread Ghanshyam Mann
Public bug reported:

The server actions below do not work for the V2.1 API:
1. start server
2. stop server
3. confirm resize
4. revert resize

These need to be converted to V2.1 from the V3 base code.

** Affects: nova
 Importance: Undecided
 Assignee: Ghanshyam Mann (ghanshyammann)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Ghanshyam Mann (ghanshyammann)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367575

Title:
  Some server actions does not work for v2.1 API

Status in OpenStack Compute (Nova):
  New

Bug description:
  The server actions below do not work for the V2.1 API:
  1. start server
  2. stop server
  3. confirm resize
  4. revert resize

  These need to be converted to V2.1 from the V3 base code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367564] [NEW] metadata definition property show should handle type specific prefix

2014-09-09 Thread Travis Tripp
Public bug reported:

metadata definition property show should handle type specific prefix

The metadata definitions API supports listing namespaces by resource
type.  For example, you can list only namespaces applicable to images by
specifying OS::Glance::Image

The API also supports showing namespace properties for a specific
resource type.  The API will automatically prepend any prefix specific
to that resource type.  For example, in the OS::Compute::VirtCPUTopology
namespace, the properties will come back with "hw_" prepended.

However, if you then ask the API to show the property with the prefix,
it will return a "not found" error.  To actually see the details of the
property, you have to know the base property name (without the prefix).  It
would be nice if the API would attempt to auto-resolve any automatically
prefixed properties when showing a property.
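
A purely illustrative sketch of that auto-resolution (none of these names are
Glance's actual internals): if the lookup with the resource-type prefix fails,
retry with the prefix stripped before reporting "not found".

class PropertyNotFound(Exception):
    pass


def show_property(lookup, namespace, name, resource_type_prefix=None):
    """Try the prefixed name first, then fall back to the base property name."""
    try:
        return lookup(namespace, name)                     # e.g. "hw_cpu_cores"
    except PropertyNotFound:
        if resource_type_prefix and name.startswith(resource_type_prefix):
            return lookup(namespace, name[len(resource_type_prefix):])  # "cpu_cores"
        raise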

This is evident from the command line.  If you look at the below
interactions, you will see the namespaces listed, then limited to a
particular resource type, then the properties shown for the namespace,
and then a failure to show the property using the automatically
prepended prefix.

* Apologies for the formatting.

$ glance --os-image-api-version 2 md-namespace-list
+------------------------------------+
| namespace                          |
+------------------------------------+
| OS::Compute::VMware                |
| OS::Compute::XenAPI                |
| OS::Compute::Quota                 |
| OS::Compute::Libvirt               |
| OS::Compute::Hypervisor            |
| OS::Compute::Watchdog              |
| OS::Compute::HostCapabilities      |
| OS::Compute::Trust                 |
| OS::Compute::VirtCPUTopology       |
| OS::Glance:CommonImageProperties   |
| OS::Compute::RandomNumberGenerator |
+------------------------------------+

$ glance --os-image-api-version 2 md-namespace-list --resource-type 
OS::Glance::Image
+------------------------------+
| namespace                    |
+------------------------------+
| OS::Compute::VMware          |
| OS::Compute::XenAPI          |
| OS::Compute::Libvirt         |
| OS::Compute::Hypervisor      |
| OS::Compute::Watchdog        |
| OS::Compute::VirtCPUTopology |
+------------------------------+

$ glance --os-image-api-version 2 md-namespace-show 
OS::Compute::VirtCPUTopology --resource-type OS::Glance::Image
++--+
| Property   | Value
|
++--+
| created_at | 2014-09-10T02:55:40Z 
|
| description| This provides the preferred socket/core/thread 
counts for the virtual CPU|
|| instance exposed to guests. This enables the 
ability to avoid hitting|
|| limitations on vCPU topologies that OS vendors 
place on their products. See  |
|| also: 
http://git.openstack.org/cgit/openstack/nova-specs/tree/specs/juno/virt-   |
|| driver-vcpu-topology.rst 
|
| display_name   | Virtual CPU Topology 
|
| namespace  | OS::Compute::VirtCPUTopology 
|
| owner  | admin
|
| properties | ["hw_cpu_cores", "hw_cpu_sockets", 
"hw_cpu_maxsockets", "hw_cpu_threads",|
|| "hw_cpu_maxcores", "hw_cpu_maxthreads"]  
|
| protected  | True 
|
| resource_type_associations | ["OS::Glance::Image", "OS::Cinder::Volume", 
"OS::Nova::Flavor"]  |
| schema | /v2/schemas/metadefs/namespace   
|
| visibility | public   
|
++--+

ttripp@ubuntu:/opt/stack/glance$ glance --os-image-api-version 2 
md-property-show OS::Compute::VirtCPUTopology hw_cpu_cores
Request returned failure status 404.

 
  404 Not Found
 
 
  404 Not Found
  Could not find property hw_cpu_cores

 
 (HTTP 404)

ttripp@ubuntu:/opt/stack/glance$ glance --os-image-api-version 2 
md-property-show OS::Compute::VirtCPUTopology cpu_cores
+-+---+
| Property 

[Yahoo-eng-team] [Bug 1367552] [NEW] Cisco test cases are using _do_side_effect() which has been renamed

2014-09-09 Thread Henry Gessau
Public bug reported:

In https://review.openstack.org/78880
NeutronDbPluginV2TestCase._do_side_effect()  was renamed to
_fail_second_call().

But some Cisco Nexus test cases still use the old name.

** Affects: neutron
 Importance: Undecided
 Assignee: Henry Gessau (gessau)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Henry Gessau (gessau)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367552

Title:
  Cisco test cases are using _do_side_effect() which has been renamed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In https://review.openstack.org/78880
  NeutronDbPluginV2TestCase._do_side_effect()  was renamed to
  _fail_second_call().

  But some Cisco Nexus test cases still use the old name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367552/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367548] [NEW] metadata definition VMware namespace has incorrect capitalization

2014-09-09 Thread Travis Tripp
Public bug reported:

metadata definition VMware namespace has incorrect capitalization

OS::Compute::VMwAre  should be OS::Compute::VMware


ttripp@ubuntu:~/devstack$ glance --os-image-api-version 2 md-namespace-list
+------------------------------------+
| namespace                          |
+------------------------------------+
| OS::Compute::VMwAre                |
etc etc etc

ttripp@ubuntu:~/devstack$ glance --os-image-api-version 2 md-namespace-show 
OS::Compute::VMwAre
++--+
| Property   | Value
|
++--+
| created_at | 2014-09-10T02:55:41Z 
|
| description| The VMware compute driver options.   
|
||  
|
|| These are properties specific to compute 
drivers.  For a list of all |
|| hypervisors, see here: 
https://wiki.openstack.org/wiki/HypervisorSupportMatrix.  |
| display_name   | VMware Driver Options
|
| namespace  | OS::Compute::VMwAre  
|
| owner  | admin
|
| properties | ["vmware_ostype", "vmware_adaptertype"]  
|
| protected  | True 
|
| resource_type_associations | ["OS::Glance::Image"]
|
| schema | /v2/schemas/metadefs/namespace   
|
| visibility | public   
|
++--+

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367548

Title:
  metadata definition VMware namespace has incorrect capitalization

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  metadata definition VMware namespace has incorrect capitalization

  OS::Compute::VMwAre  should be OS::Compute::VMware

  
  ttripp@ubuntu:~/devstack$ glance --os-image-api-version 2 md-namespace-list
  +------------------------------------+
  | namespace                          |
  +------------------------------------+
  | OS::Compute::VMwAre                |
  etc etc etc

  ttripp@ubuntu:~/devstack$ glance --os-image-api-version 2 md-namespace-show 
OS::Compute::VMwAre
  
++--+
  | Property   | Value  
  |
  
++--+
  | created_at | 2014-09-10T02:55:41Z   
  |
  | description| The VMware compute driver options. 
  |
  ||
  |
  || These are properties specific to compute 
drivers.  For a list of all |
  || hypervisors, see here: 
https://wiki.openstack.org/wiki/HypervisorSupportMatrix.  |
  | display_name   | VMware Driver Options  
  |
  | namespace  | OS::Compute::VMwAre
  |
  | owner  | admin  
  |
  | properties | ["vmware_ostype", "vmware_adaptertype"]
  |
  | protected  | True   
  |
  | resource_type_associations | ["OS::Glance::Image"]  
  |
  | schema 

[Yahoo-eng-team] [Bug 1367545] [NEW] metadef OS::Glance::CommonImageProperties missing colon ":

2014-09-09 Thread Travis Tripp
Public bug reported:


OS::Glance:CommonImageProperties should be OS::Glance::CommonImageProperties

The second "::" separator is missing one ":".

See below.

glance --os-image-api-version 2 md-namespace-list
+------------------------------------+
| namespace                          |
+------------------------------------+
| OS::Compute::VMwAre                |
| OS::Compute::XenAPI                |
| OS::Compute::Quota                 |
| OS::Compute::Libvirt               |
| OS::Compute::Hypervisor            |
| OS::Compute::Watchdog              |
| OS::Compute::HostCapabilities      |
| OS::Compute::Trust                 |
| OS::Compute::VirtCPUTopology       |
| OS::Glance:CommonImageProperties   |
| OS::Compute::RandomNumberGenerator |
+------------------------------------+

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367545

Title:
  metadef OS::Glance::CommonImageProperties missing colon ":

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  
  OS::Glance:CommonImageProperties should be OS::Glance::CommonImageProperties

  The second "::" separator is missing one ":".

  See below.

  glance --os-image-api-version 2 md-namespace-list
  +------------------------------------+
  | namespace                          |
  +------------------------------------+
  | OS::Compute::VMwAre                |
  | OS::Compute::XenAPI                |
  | OS::Compute::Quota                 |
  | OS::Compute::Libvirt               |
  | OS::Compute::Hypervisor            |
  | OS::Compute::Watchdog              |
  | OS::Compute::HostCapabilities      |
  | OS::Compute::Trust                 |
  | OS::Compute::VirtCPUTopology       |
  | OS::Glance:CommonImageProperties   |
  | OS::Compute::RandomNumberGenerator |
  +------------------------------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367545/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367542] [NEW] OpenStack Dashboard couldn't displayed nested stack very well

2014-09-09 Thread onelab
Public bug reported:

OpenStack Dashboard cannot display nested stacks very well.

Example (Icehouse OpenStack Dashboard):

In panel "Project" ---> "Orchestration" ---> "Stacks", when clicked tab 
"Resources", the "Stack Resources" could be displayed very well, continued to 
click this column's text of "Stack Resource" which is a link in the tables, the 
tab "Overview" could be displayed very well. Then, clicked this text of 
"Resource ID" which is a link in the tab "Overview", the displayed content in 
the html is: 
"The page you were looking for doesn't exist

You may have mistyped the address or the page may have moved."


Root cause:

In the template project/stacks/templates/stacks/_resource_overview.html,
the "resource_url" need to be used.

{% trans "Resource ID" %}

  
  {{ resource.physical_resource_id }}
  


The value of resource_url is set in the "ResourceOverviewTab" class in
project/stacks/tabs.py; please see the code below:

class ResourceOverviewTab(tabs.Tab):
name = _("Overview")
slug = "resource_overview"
template_name = "project/stacks/_resource_overview.html"

def get_context_data(self, request):
resource = self.tab_group.kwargs['resource']
resource_url = mappings.resource_to_url(resource)
return {
"resource": resource,
"resource_url": resource_url,
"metadata": self.tab_group.kwargs['metadata']}


The 'resource_urls' dictionary in project/stacks/mappings.py is out of date.
It has not been updated in the Icehouse version, or even earlier; newer
resource types such as the Neutron ones cannot be found in it.

resource_urls = {
"AWS::EC2::Instance": {
'link': 'horizon:project:instances:detail'},
"AWS::EC2::NetworkInterface": {
'link': 'horizon:project:networks:ports:detail'},
"AWS::EC2::RouteTable": {
'link': 'horizon:project:routers:detail'},
"AWS::EC2::Subnet": {
'link': 'horizon:project:networks:subnets:detail'},
"AWS::EC2::Volume": {
'link': 'horizon:project:volumes:volumes:detail'},
"AWS::EC2::VPC": {
'link': 'horizon:project:networks:detail'},
"AWS::S3::Bucket": {
'link': 'horizon:project:containers:index'},
"OS::Quantum::Net": {
'link': 'horizon:project:networks:detail'},
"OS::Quantum::Port": {
'link': 'horizon:project:networks:ports:detail'},
"OS::Quantum::Router": {
'link': 'horizon:project:routers:detail'},
"OS::Quantum::Subnet": {
'link': 'horizon:project:networks:subnets:detail'},
"OS::Swift::Container": {
'link': 'horizon:project:containers:index',
'format_pattern': '%s' + swift.FOLDER_DELIMITER},
} 

Since the "resource_type" could NOT match the type in "resource_urls", the 
value of "resource_url" in the template is "None". So we didn't find the 
correct html template. 
For example, the URL is like 
"http://10.10.0.3/dashboard/project/stacks/stack/[outer stack 
id]/[resource_name]/None". 
Note: We can get the resource by "resource_name", and the resource_type in the 
resource is user customized, and is nested stack actually.


What's more, if we add a new resource_type (which is in fact quite frequent in
real projects), we must update the "resource_urls" code, which is tedious and
error prone.
Since Heat templates already support defining new resource_types based on the
customer's requirements, the dashboard should stay consistent with that;
always updating this dictionary manually is not good behavior. Shall we do an
enhancement on this point (see the sketch below)?
Please help to check it. Thanks very much.
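
A hedged sketch of one possible enhancement to resource_to_url() in
project/stacks/mappings.py. The fallback view name
'horizon:project:stacks:detail' and the overall structure are assumptions for
illustration only, not the actual fix:

from django.core import urlresolvers


def resource_to_url(resource):
    if not resource or not resource.physical_resource_id:
        return None
    mapping = resource_urls.get(resource.resource_type)
    if mapping and 'link' in mapping:
        resource_id = resource.physical_resource_id
        if 'format_pattern' in mapping:
            resource_id = mapping['format_pattern'] % resource_id
        return urlresolvers.reverse(mapping['link'], args=(resource_id,))
    # Unknown (often user-defined, nested-stack) resource types: fall back to
    # the generic stack detail page instead of building a ".../None" URL.
    return urlresolvers.reverse('horizon:project:stacks:detail',
                                args=(resource.physical_resource_id,))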

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367542

Title:
  OpenStack Dashboard couldn't displayed nested stack very well

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  OpenStack Dashboard cannot display nested stacks very well.

  Example (Icehouse OpenStack Dashboard):

  In panel "Project" ---> "Orchestration" ---> "Stacks", when clicked tab 
"Resources", the "Stack Resources" could be displayed very well, continued to 
click this column's text of "Stack Resource" which is a link in the tables, the 
tab "Overview" could be displayed very well. Then, clicked this text of 
"Resource ID" which is a link in the tab "Overview", the displayed content in 
the html is: 
  "The page you were looking for doesn't exist

  You may have mistyped the address or the page may have moved."

  
  Root cause:

  In the template
  project/stacks/templates/stacks/_resource_overview.html, the
  "resource_url" need to be used.

  {% trans "Resource ID" %}
  

{{ resource.physical_resource_id }}

  

  The value of resource_url is set in the "ResourceOverviewTab" class in
  project/stacks/tabs.py; please see the code below:

  cla

[Yahoo-eng-team] [Bug 1367540] [NEW] VMware: Failed to launch instance from volume

2014-09-09 Thread Thang Pham
Public bug reported:

If you create a volume from an image and launch an instance from it, the
instance fails to be created.

To recreate this bug:
1. Create a volume from an image
2. Launch an instance from the volume

The following error is thrown in n-cpu.log:

2014-09-09 22:33:47.037 ERROR nova.compute.manager 
[req-e17654a6-a58b-4760-a383-643dd054c691 demo demo] [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea] Instance failed to spawn
2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea] Traceback (most recent call last):
2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2171, in _build_resources
2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea] yield resources
2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2050, in _build_and_run_instance
2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea] block_device_info=block_device_info)
2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea]   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 446, in spawn
2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea] admin_password, network_info, 
block_device_info)
2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea]   File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 298, in spawn
2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea] vi = 
self._get_vm_config_info(instance, image_info, instance_name)
2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea]   File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 276, in _get_vm_config_info
2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea] image_info.file_size_in_gb > 
instance.root_gb):
2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea]   File 
"/opt/stack/nova/nova/virt/vmwareapi/vmware_images.py", line 92, in 
file_size_in_gb
2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea] return self.file_size / units.Gi
2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea] TypeError: unsupported operand type(s) 
for /: 'unicode' and 'int'

It appears that a simple conversion of the file_size to an int in
vmware_images.py should fix this.
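
A minimal, standalone sketch of that suggestion (the VMwareImage class below is
an illustrative stand-in, not Nova's actual class): Glance can hand file_size
back as a unicode string, so cast it before dividing.

GIB = 1024 ** 3  # stands in for units.Gi


class VMwareImage(object):
    def __init__(self, file_size):
        self.file_size = file_size

    @property
    def file_size_in_gb(self):
        # int() avoids: TypeError: unsupported operand type(s) for /: 'unicode' and 'int'
        return int(self.file_size) / GIB


assert VMwareImage(u'2147483648').file_size_in_gb == 2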

** Affects: nova
 Importance: High
 Assignee: Thang Pham (thang-pham)
 Status: New


** Tags: vmware

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367540

Title:
  VMware: Failed to launch instance from volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  If you create a volume from an image and launch an instance from it,
  the instance fails to be created.

  To recreate this bug:
  1. Create a volume from an image
  2. Launch an instance from the volume

  The following error is thrown in n-cpu.log:

  2014-09-09 22:33:47.037 ERROR nova.compute.manager 
[req-e17654a6-a58b-4760-a383-643dd054c691 demo demo] [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea] Instance failed to spawn
  2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea] Traceback (most recent call last):
  2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2171, in _build_resources
  2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea] yield resources
  2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea]   File 
"/opt/stack/nova/nova/compute/manager.py", line 2050, in _build_and_run_instance
  2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea] block_device_info=block_device_info)
  2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea]   File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 446, in spawn
  2014-09-09 22:33:47.037 32639 TRACE nova.compute.manager [instance: 
5ed921e8-c4d8-45a1-964a-93d09a43f2ea] admin_password, network_info, 
block_device_info)
  20

[Yahoo-eng-team] [Bug 1367523] [NEW] Evacuate does not preserve data on shared storage

2014-09-09 Thread Aaron Smith
Public bug reported:

Environment:
  Centos6.5/KVM/IceHouse latest

I have two compute nodes with shared NFS storage.  Live migration and migration
work with the current configuration of NFS/Nova/etc.  On one node a Cirros image
is running with a local root (no cinder volumes).  Write files to the local
filesystem.  Live migration and migration preserve the data on the local root
drive.

Shut down the node with the running instance.

nova evacuate --on-shared-storage UUID target

The disk file in the instances directory is deleted and rebuilt, losing
customer data.

Looking into the nova/compute/manager.py file, it looks like there is an issue
with the recreate flag: it is not passed into the driver's rebuild function.
If I add the recreate flag to the kwargs dictionary, the evacuate works
flawlessly.

kwargs = dict(
recreate=recreate,  <-- added
context=context,
instance=instance,
image_meta=image_meta,
injected_files=files,
admin_password=new_pass,
bdms=bdms,
detach_block_devices=detach_block_devices,
attach_block_devices=self._prep_block_device,
block_device_info=block_device_info,
network_info=network_info,
preserve_ephemeral=preserve_ephemeral)
try:
self.driver.rebuild(**kwargs)
except NotImplementedError:
# NOTE(rpodolyaka): driver doesn't provide specialized version
# of rebuild, fall back to the default implementation
self._rebuild_default_impl(**kwargs)
instance.power_state = self._get_power_state(context, instance)

This looks like a major oversight so perhaps the initial feature was not
meant to work this way?

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: evacuation ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367523

Title:
  Evacuate does not preserve data on shared storage

Status in OpenStack Compute (Nova):
  New

Bug description:
  Environment:
Centos6.5/KVM/IceHouse latest

  I have two compute nodes with shared NFS storage.  Live migration and
  migration work with the current configuration of NFS/Nova/etc.  On one node a
  Cirros image is running with a local root (no cinder volumes).  Write files
  to the local filesystem.  Live migration and migration preserve the data on
  the local root drive.

  Shut down the node with the running instance.

  nova evacuate --on-shared-storage UUID target

  The disk file in the instances directory is deleted and rebuilt,
  losing customer data.

  Looking into the nova/compute/manager.py file, it looks like there is an
  issue with the recreate flag: it is not passed into the driver's rebuild
  function.  If I add the recreate flag to the kwargs dictionary, the evacuate
  works flawlessly.

  kwargs = dict(
  recreate=recreate,  <-- added
  context=context,
  instance=instance,
  image_meta=image_meta,
  injected_files=files,
  admin_password=new_pass,
  bdms=bdms,
  detach_block_devices=detach_block_devices,
  attach_block_devices=self._prep_block_device,
  block_device_info=block_device_info,
  network_info=network_info,
  preserve_ephemeral=preserve_ephemeral)
  try:
  self.driver.rebuild(**kwargs)
  except NotImplementedError:
  # NOTE(rpodolyaka): driver doesn't provide specialized version
  # of rebuild, fall back to the default implementation
  self._rebuild_default_impl(**kwargs)
  instance.power_state = self._get_power_state(context, instance)

  This looks like a major oversight so perhaps the initial feature was
  not meant to work this way?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367500] [NEW] IPv6 network doesn't create namespace, dhcp port

2014-09-09 Thread Sergey Shnaidman
Public bug reported:

IPv6 networking has been changed by recent commits.
Create a network and an IPv6 subnet with default settings, then create a port
in the network: no namespace is created and no DHCP port is created in the
subnet, although the port does get an IP address assigned.
IPv4 networking continues to work as required.

$ neutron net-create netto
Created a new network:
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | True                                 |
| id              | 849b4dbf-0914-4cfb-956b-e0cc5d8054ab |
| name            | netto                                |
| router:external | False                                |
| shared          | False                                |
| status          | ACTIVE                               |
| subnets         |                                      |
| tenant_id       | 5664b23312504826818c9cb130a9a02f     |
+-----------------+--------------------------------------+

$ neutron subnet-create --ip-version 6 netto 2011::/64
Created a new subnet:
+-------------------+----------------------------------------------------------+
| Field             | Value                                                    |
+-------------------+----------------------------------------------------------+
| allocation_pools  | {"start": "2011::2", "end": "2011::ffff:ffff:ffff:fffe"} |
| cidr              | 2011::/64                                                |
| dns_nameservers   |                                                          |
| enable_dhcp       | True                                                     |
| gateway_ip        | 2011::1                                                  |
| host_routes       |                                                          |
| id                | e10300d1-194f-4712-b2fc-2107ac3fe909                     |
| ip_version        | 6                                                        |
| ipv6_address_mode |                                                          |
| ipv6_ra_mode      |                                                          |
| name              |                                                          |
| network_id        | 849b4dbf-0914-4cfb-956b-e0cc5d8054ab                     |
| tenant_id         | 5664b23312504826818c9cb130a9a02f                         |
+-------------------+----------------------------------------------------------+

$ neutron port-create netto
Created a new port:
+---++
| Field | Value 
 |
+---++
| admin_state_up| True  
 |
| allowed_address_pairs |   
 |
| binding:vnic_type | normal
 |
| device_id |   
 |
| device_owner  |   
 |
| fixed_ips | {"subnet_id": "e10300d1-194f-4712-b2fc-2107ac3fe909", 
"ip_address": "2011::2"} |
| id| 175eaa91-441e-48df-9267-bc7fc808dce8  
 |
| mac_address   | fa:16:3e:26:51:79 
 |
| name  |   
 |
| network_id| 849b4dbf-0914-4cfb-956b-e0cc5d8054ab  
 |
| security_groups   | c7756502-5eda-4f43-9977-21cfb73b4d4e  
 |
| status| DOWN  
 |
| tenant_id | 5664b23312504826818c9cb130a9a02f  
 |
+---++

$ neutron port-list | grep e10300d1-194f-4712-b2fc-2107ac3fe909
| 175eaa91-441e-48df-9267-bc7fc808dce8 |  | fa:16:3e:26:51:79 | 
{"subnet_id": "e10300d1-194f-4712-b2fc-2107ac3fe909", "ip_address": "2011::2"}  
|

there is no DHCP port

$ ip netns 
qrouter-b7f94a05-8b02-4221-9330-bb2d470f6b0c
(default namespace from the devstack install)

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ipv6

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpa

[Yahoo-eng-team] [Bug 966107] Re: Instances stuck in Image_snapshot/Queued should be cleaned up

2014-09-09 Thread Joe Gordon
Is this still valid? The blueprint was superseded.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/966107

Title:
  Instances stuck in Image_snapshot/Queued should be cleaned up

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  Scenario:
  If various systems like RabbitMQ, the MySQL DB server or nova-compute go
  down during a snapshot, the instance is stuck either in the queued or
  Image_snapshot state (depending on the timing of when the specific component
  went down).

  Expected Response:
  Instance should be eventually brought back to ACTIVE state.
  If there are snapshot entries in glance DB and/or on disk, they should be 
cleaned up.

  Actual Response:
  Instance remains stuck in Image_snapshot state.
  This is problematic because once it is stuck in this state, no snapshot is 
allowed on this instance till it returns in either ACTIVE or SHUTOFF state.

  in nova/compute/api.py
  @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.SHUTOFF])
  def snapshot(self, context, instance, name, extra_properties=None):


  Notes :
  This was reproduced forcefully for testing purposes by putting breakpoint at 
appropriate place(s) and then shutting down rabbitmq or mysql servers from 
other terminal window.

  Branch: master

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/966107/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 897095] Re: Nova doesn't support bridge on bonded interfaces

2014-09-09 Thread Joe Gordon
Is this still valid?

** Changed in: nova
   Status: Confirmed => Invalid

** Changed in: nova
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/897095

Title:
  Nova doesn't support bridge on bonded interfaces

Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  I set up a bonded network, so I modified my nova.conf:
  --flat_interface=bond1
  --fixed_range=10.101.0.0/16

  When I execute 'nova-network', nova-network makes a bridge (br100):

  nova@cn2:/$ ifconfig bond1
  bond1 Link encap:Ethernet  HWaddr 10:1f:74:2b:89:3c
inet6 addr: fe80::121f:74ff:fe2b:893c/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
RX packets:14538 errors:0 dropped:370 overruns:0 frame:0
TX packets:3353 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1125692 (1.1 MB)  TX bytes:210490 (210.4 KB)

  nova@cn2:/$ ifconfig br100
  br100 Link encap:Ethernet  HWaddr 10:1f:74:2b:89:3c
inet addr:10.101.0.6  Bcast:10.101.1.255  Mask:255.255.254.0
inet6 addr: fe80::acd0:d6ff:fe13:e495/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:3254 errors:0 dropped:38 overruns:0 frame:0
TX packets:352 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:170306 (170.3 KB)  TX bytes:28092 (28.0 KB)

  Then I try pinging the other compute node, but it cannot connect.

  From what I found, it may be an ARP problem for a bridge on a bonded interface:
  http://ubuntuforums.org/showthread.php?t=835732

  One solution to this problem is to turn on stp for the bridge:

  nova@cn2:/$ sudo brctl stp br100 on

  After that command, pinging the other compute node succeeds.


  I also found the nova source code that creates the bridge.
  It's in 'nova/network/linux_net.py'; I commented out the line for the stp
  option and turned stp on instead:

   949 if not _device_exists(bridge):
   950 LOG.debug(_('Starting Bridge interface for %s'), interface)
   951 _execute('brctl', 'addbr', bridge, run_as_root=True)
   952 _execute('brctl', 'setfd', bridge, 0, run_as_root=True)
   953 # _execute('brctl setageing %s 10' % bridge, 
run_as_root=True)
   954 #commented by jslee
   955 #_execute('brctl', 'stp', bridge, 'off', run_as_root=True)
   956 _execute('brctl', 'stp', bridge, 'on', run_as_root=True)
   957 # (danwent) bridge device MAC address can't be set directly.
   958 # instead it inherits the MAC address of the first device on 
the
   959 # bridge, which will either be the vlan interface, or a
   960 # physical NIC.
   961 _execute('ip', 'link', 'set', bridge, 'up', run_as_root=True)

  The original source turns the stp option off. Is there some reason to
  turn the stp option off?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/897095/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326553] Re: Instance lock/unlock state is not presented anywhere

2014-09-09 Thread Joe Gordon
** Changed in: nova
   Status: In Progress => Opinion

** Changed in: nova
   Status: Opinion => Confirmed

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326553

Title:
  Instance lock/unlock state is not presented anywhere

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  The "nova lock" and "nova unlock" commands will enable OpenStack's
  answer to EC2's "termination protection" functionality, which some
  users find handy, but the lock-state of an instance is not presented
  anywhere in the API.

  This causes confusion and panic when users attempt actions on locked
  instances, and the actions fail for no discernible reason.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1326553/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1346741] Re: Enable "Stop Instance" button

2014-09-09 Thread Julie Pichon
Opening a task on Nova based on the last couple of comments, also
tweaked the description accordingly. Maybe someone more familiar with
Nova can shed more light on what is needed here.

** Also affects: nova
   Importance: Undecided
   Status: New

** Summary changed:

- Enable "Stop Instance" button
+ Enable ACPI call for Stop/Terminate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346741

Title:
  Enable ACPI call for Stop/Terminate

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Add a "Stop Instance" button to Horizon, so it will be possible to
  shutdown a instance using ACPI call (like running "virsh shutdown
  instance-00XX" directly at the Compute Node.

  Currently, the Horizon button "Shut Off Instance" just destroy it.

  I'm not seeing a way to gracefully shutdown a instance from Horizon.
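
  For reference, a hedged illustration of the requested behaviour using the
  libvirt Python bindings directly (not Nova's code path): shutdown() sends an
  ACPI shutdown request to the guest, while destroy() hard-kills it, which is
  what "Shut Off Instance" effectively does today.

  import libvirt

  conn = libvirt.open('qemu:///system')
  dom = conn.lookupByName('instance-00XX')  # instance name taken from this report
  dom.shutdown()   # graceful: ACPI power-button event; the guest shuts itself down
  # dom.destroy()  # hard power-off, i.e. what the current button does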

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1346741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1163562] Re: Child processes have no way to use atexit functionality

2014-09-09 Thread Joe Gordon
** Changed in: nova
   Status: Triaged => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1163562

Title:
  Child processes have no way to use atexit functionality

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Currently in nova, when children are spawned via the forking model
  there exists no way for said child process to have its own set of
  atexit handlers (due to the usage of os._exit). Say each process has a
  local log handler and said log handler needs to be flushed before the
  child process exits. It would be useful to let atexit be used or an
  alternative mechanism be provided for children to invoke cleanup tasks
  before they are shutdown.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1163562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362596] Re: API Test Failure: tempest.api.compute.servers.test_server_actions.[ServerActionsTestJSON, ServerActionsTestXML]

2014-09-09 Thread Joe Gordon
It looks like around the time this error occurred (8:48) the VM was
wedged, so I think this may actually be an issue in the infra we are
running the job on.

** Changed in: nova
   Status: New => Incomplete

** Also affects: openstack-ci
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362596

Title:
  API Test Failure:
  
tempest.api.compute.servers.test_server_actions.[ServerActionsTestJSON,ServerActionsTestXML]

Status in OpenStack Compute (Nova):
  Incomplete
Status in OpenStack Core Infrastructure:
  New

Bug description:
  See here: http://logs.openstack.org/99/115799/7/gate/gate-tempest-
  dsvm-neutron-pg/6394ec4/console.html

  2014-08-28 09:30:46.102 | ==
  2014-08-28 09:30:46.103 | Failed 2 tests - output below:
  2014-08-28 09:30:46.103 | ==
  2014-08-28 09:30:46.103 | 
  2014-08-28 09:30:46.103 | setUpClass 
(tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON)
  2014-08-28 09:30:46.104 | 
--
  2014-08-28 09:30:46.104 | 
  2014-08-28 09:30:46.104 | Captured traceback:
  2014-08-28 09:30:46.105 | ~~~
  2014-08-28 09:30:46.105 | Traceback (most recent call last):
  2014-08-28 09:30:46.105 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 59, in setUpClass
  2014-08-28 09:30:46.106 | cls.server_id = cls.rebuild_server(None)
  2014-08-28 09:30:46.106 |   File "tempest/api/compute/base.py", line 354, 
in rebuild_server
  2014-08-28 09:30:46.106 | resp, server = 
cls.create_test_server(wait_until='ACTIVE', **kwargs)
  2014-08-28 09:30:46.106 |   File "tempest/api/compute/base.py", line 254, 
in create_test_server
  2014-08-28 09:30:46.107 | raise ex
  2014-08-28 09:30:46.107 | BuildErrorException: Server 
de02306b-a65c-47ef-86ee-64afc61794e3 failed to build and is in ERROR status
  2014-08-28 09:30:46.107 | Details: {u'message': u'No valid host was 
found. ', u'code': 500, u'created': u'2014-08-28T08:48:40Z'}
  2014-08-28 09:30:46.108 | 
  2014-08-28 09:30:46.108 | 
  2014-08-28 09:30:46.108 | setUpClass 
(tempest.api.compute.servers.test_server_actions.ServerActionsTestXML)
  2014-08-28 09:30:46.109 | 
-
  2014-08-28 09:30:46.109 | 
  2014-08-28 09:30:46.109 | Captured traceback:
  2014-08-28 09:30:46.110 | ~~~
  2014-08-28 09:30:46.110 | Traceback (most recent call last):
  2014-08-28 09:30:46.110 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 59, in setUpClass
  2014-08-28 09:30:46.110 | cls.server_id = cls.rebuild_server(None)
  2014-08-28 09:30:46.111 |   File "tempest/api/compute/base.py", line 354, 
in rebuild_server
  2014-08-28 09:30:46.111 | resp, server = 
cls.create_test_server(wait_until='ACTIVE', **kwargs)
  2014-08-28 09:30:46.111 |   File "tempest/api/compute/base.py", line 254, 
in create_test_server
  2014-08-28 09:30:46.112 | raise ex
  2014-08-28 09:30:46.112 | BuildErrorException: Server 
e966b5a5-6d8b-4fa2-8b90-cd28e5445a2a failed to build and is in ERROR status
  2014-08-28 09:30:46.112 | Details: {'message': 'No valid host was found. 
', 'code': '500', 'details': 'None', 'created': '2014-08-28T08:48:44Z'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1362596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367480] [NEW] Add test for grant CRUD on test_backend

2014-09-09 Thread Samuel de Medeiros Queiroz
Public bug reported:

The fact of not having tests for this may cause some bugs.
For example, in the KVS backend we call a non-existent method, which would have
been avoided if we had such tests. [1]

[1]
https://github.com/openstack/keystone/blame/master/keystone/assignment/backends/kvs.py#L512
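
A hedged sketch of the kind of grant CRUD coverage being asked for in
test_backend; the user/project/role fixture names are placeholders, while
create_grant / list_grants / delete_grant follow the assignment API of that
era.

def test_grant_crud(self):
    self.assignment_api.create_grant(role_id=self.role['id'],
                                     user_id=self.user['id'],
                                     project_id=self.project['id'])
    grants = self.assignment_api.list_grants(user_id=self.user['id'],
                                             project_id=self.project['id'])
    self.assertIn(self.role['id'], [g['id'] for g in grants])
    self.assignment_api.delete_grant(role_id=self.role['id'],
                                     user_id=self.user['id'],
                                     project_id=self.project['id'])
    self.assertEqual([], self.assignment_api.list_grants(
        user_id=self.user['id'], project_id=self.project['id']))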

** Affects: keystone
 Importance: Undecided
 Assignee: Samuel de Medeiros Queiroz (samuel-z)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Samuel de Medeiros Queiroz (samuel-z)

** Description changed:

  The fact of not having tests for this may cause some bugs.
- For example, in KVS backend we call an non existent method and it would be 
avoided if we had such tests. [1]
+ For example, in KVS backend we call a non existent method and it would be 
avoided if we had such tests. [1]
  
  [1]
  
https://github.com/openstack/keystone/blame/master/keystone/assignment/backends/kvs.py#L512

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1367480

Title:
  Add test for grant CRUD on test_backend

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The fact of not having tests for this may cause some bugs.
  For example, in the KVS backend we call a non-existent method, which would
  have been avoided if we had such tests. [1]

  [1]
  
https://github.com/openstack/keystone/blame/master/keystone/assignment/backends/kvs.py#L512

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1367480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1328067] Re: Token with "placeholder" ID issued

2014-09-09 Thread Morgan Fainberg
This would break the backwards compatibility of the Keystone API. The
v2 token requires an id; however, with PKI tokens the id in the token
body is part of the signing/hashing that is used to generate the token
id. This means that we cannot have an accurate id in the v2 token body.

When using PKI tokens, do not use the id encoded in the token body.

** Changed in: keystone
   Status: In Progress => Won't Fix

** Changed in: keystone
Milestone: juno-rc1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1328067

Title:
  Token with "placeholder" ID issued

Status in OpenStack Identity (Keystone):
  Won't Fix
Status in OpenStack Identity  (Keystone) Middleware:
  New
Status in Python client library for Keystone:
  Fix Committed

Bug description:
  We're seeing test failures, where it seems that an invalid token is
  issued, with the ID of "placeholder"

  http://logs.openstack.org/69/97569/2/check/check-tempest-dsvm-
  full/565d328/logs/screen-h-eng.txt.gz

  See context_auth_token_info which is being passed using the auth_token
  keystone.token_info request environment variable (ref
  https://review.openstack.org/#/c/97568/ which is the previous patch in
  the chain from the log referenced above).

  It seems like auth_token is getting a token, but there's some sort of
  race in the backend which prevents an actual token being stored?
  Trying to use "placeholder" as a token ID doesn't work, so it seems
  like this default assigned in the controller is passed back to
  auth_token, which treats it as a valid token, even though it's not.

  
https://github.com/openstack/keystone/blob/master/keystone/token/controllers.py#L121

  I'm not sure how to debug this further, as I can't reproduce this
  problem locally.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1328067/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367476] [NEW] update to bootstrap 3 icons

2014-09-09 Thread Cindy Lu
Public bug reported:

There are still some instances of the old Bootstrap 2 icon markup (<i class="icon-...">) for icons.

We should be using glyphicons instead.

** Affects: horizon
 Importance: Undecided
 Assignee: Cindy Lu (clu-m)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Cindy Lu (clu-m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367476

Title:
  update to bootstrap 3 icons

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There are still some instances of the old Bootstrap 2 icon markup (<i class="icon-...">) for icons.

  We should be using glyphicons instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1316926] Re: failed to reach ACTIVE status and task state "None" within the required time (196 s). Current status: BUILD. Current task state: spawning.

2014-09-09 Thread Joe Gordon
It looks like this may be a libvirt bug; if so, I am fairly sure we have
a duplicate of this somewhere.

** Changed in: nova
   Status: New => Invalid

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1316926

Title:
  failed to reach ACTIVE status and task state "None" within the
  required time (196 s). Current status: BUILD. Current task state:
  spawning.

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  Running test_reset_network_inject_network_info, we see a failure where the
  server is unable to reach ACTIVE status.

  http://logs.openstack.org/71/91171/11/gate/gate-tempest-dsvm-
  full/8cf415d/console.html

  2014-05-07 03:33:22.910 | {3} 
tempest.api.compute.v3.admin.test_servers.ServersAdminV3Test.test_reset_network_inject_network_info
 [196.920138s] ... FAILED
  2014-05-07 03:33:22.910 | 
  2014-05-07 03:33:22.910 | Captured traceback:
  2014-05-07 03:33:22.910 | ~~~
  2014-05-07 03:33:22.910 | Traceback (most recent call last):
  2014-05-07 03:33:22.910 |   File 
"tempest/api/compute/v3/admin/test_servers.py", line 170, in 
test_reset_network_inject_network_info
  2014-05-07 03:33:22.910 | resp, server = 
self.create_test_server(wait_until='ACTIVE')
  2014-05-07 03:33:22.910 |   File "tempest/api/compute/base.py", line 233, 
in create_test_server
  2014-05-07 03:33:22.910 | raise ex
  2014-05-07 03:33:22.910 | TimeoutException: Request timed out
  2014-05-07 03:33:22.910 | Details: Server 
4491ab2f-2228-4d3f-b364-77d0276c18da failed to reach ACTIVE status and task 
state "None" within the required time (196 s). Current status: BUILD. Current 
task state: spawning.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1316926/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352092] Re: tempest.scenario.test_large_ops.TestLargeOpsScenario.test_large_ops_scenario_3

2014-09-09 Thread Sean Dague
Honestly, this isn't a bug report, it's a stack trace. Stack traces are
easy to come by, but we need something more to make it a bug we're
tracking.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352092

Title:
  tempest.scenario.test_large_ops.TestLargeOpsScenario.test_large_ops_scenario_3

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  2014-08-04 01:41:35.047 | Traceback (most recent call last):
  2014-08-04 01:41:35.047 |   File "tempest/scenario/manager.py", line 175, in 
delete_wrapper
  2014-08-04 01:41:35.047 | thing.delete()
  2014-08-04 01:41:35.048 |   File 
"/opt/stack/new/python-novaclient/novaclient/v1_1/security_groups.py", line 31, 
in delete
  2014-08-04 01:41:35.048 | self.manager.delete(self)
  2014-08-04 01:41:35.048 |   File 
"/opt/stack/new/python-novaclient/novaclient/v1_1/security_groups.py", line 71, 
in delete
  2014-08-04 01:41:35.049 | self._delete('/os-security-groups/%s' % 
base.getid(group))
  2014-08-04 01:41:35.049 |   File 
"/opt/stack/new/python-novaclient/novaclient/base.py", line 109, in _delete
  2014-08-04 01:41:35.049 | _resp, _body = self.api.client.delete(url)
  2014-08-04 01:41:35.050 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 538, in delete
  2014-08-04 01:41:35.050 | return self._cs_request(url, 'DELETE', **kwargs)
  2014-08-04 01:41:35.050 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 507, in 
_cs_request
  2014-08-04 01:41:35.050 | resp, body = self._time_request(url, method, 
**kwargs)
  2014-08-04 01:42:12.628 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 481, in 
_time_request
  2014-08-04 01:42:12.629 | resp, body = self.request(url, method, **kwargs)
  2014-08-04 01:42:12.629 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 475, in request
  2014-08-04 01:42:12.629 | raise exceptions.from_response(resp, body, url, 
method)
  2014-08-04 01:42:12.630 | BadRequest: Security group is still in use (HTTP 
400) (Request-ID: req-cb8e9344-57e7-4ad8-962c-c527934eae59)
  2014-08-04 01:42:12.630 | 
==
  2014-08-04 01:42:12.630 | FAIL: 
tempest.scenario.test_large_ops.TestLargeOpsScenario.test_large_ops_scenario_3[compute,image]
  2014-08-04 01:42:12.631 | tags: worker-0
  2014-08-04 01:42:12.631 | 
--
  2014-08-04 01:42:12.631 | Empty attachments:
  2014-08-04 01:42:12.631 |   stderr
  2014-08-04 01:42:12.632 |   stdout

  detail: http://logs.openstack.org/38/38/3/check/gate-tempest-dsvm-
  large-ops/c0005d5/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363319] Re: Typo in config help for token and revocation events caching

2014-09-09 Thread Dolph Mathews
This was already fixed somewhere without being tracked correctly.

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1363319

Title:
  Typo in config help for token and revocation events caching

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  
  Typo in config help for 'token' and 'revocation events'
  "cacheing" should be changed to "caching"

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1363319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367007] Re: Ironic driver requires extra_specs

2014-09-09 Thread Thierry Carrez
** No longer affects: nova/juno

** Changed in: nova
   Importance: High => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367007

Title:
  Ironic driver requires extra_specs

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Comments on review https://review.openstack.org/#/c/111429/ suggested
  that the Ironic driver should use

flavor = instance.get_flavor()

  instead of

flavor = flavor_obj.Flavor.get_by_id(context,
  instance['instance_type_id'])

  During the crunch to land things before feature freeze, these were
  integrated in the proposal to the Nova tree prior to being landed in
  the Ironic tree (the only place where they would have been tested).
  These changes actually broke the driver, since it requires access to
  flavor['extra_specs'] -- which is not present in the instance's cached
  copy of the flavor.

  This problem was discovered when attempting to update the devstack
  config and begin testing with the driver from the Nova tree (rather
  than the copy of the driver in the Ironic tree). That patch is here:

  https://review.openstack.org/#/c/119844/

  The error being encountered can be seen both on the devstack patch
  (eg, in the Nova code)

  http://logs.openstack.org/44/119844/2/check/check-tempest-dsvm-
  virtual-ironic-nv/ce443f8/logs/screen-n-cpu.txt.gz

  and in the back-port of the same code to Ironic here:

  http://logs.openstack.org/65/119165/3/check/check-tempest-dsvm-
  virtual-
  ironic/c161a89/logs/screen-n-cpu.txt.gz#_2014-09-08_08_41_06_821

  
  ==
  Proposed fix
  ==

  Fetch flavor['extra_specs'] on demand, when needed by the Ironic
  driver.
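
  For illustration only, a minimal sketch of what fetching on demand could
  look like (not the actual patch; the helper name is invented here and it
  simply combines the two calls already quoted above):

    # Sketch only -- not the real Ironic driver code.
    from nova.objects import flavor as flavor_obj

    def _flavor_with_extra_specs(context, instance):
        # The flavor cached on the instance (instance.get_flavor()) may
        # not include extra_specs, so do a fresh lookup when the driver
        # actually needs them.
        return flavor_obj.Flavor.get_by_id(context,
                                           instance['instance_type_id'])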

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1222852] Re: whitebox tests should be converted to nova unit tests

2014-09-09 Thread Joe Gordon
sdague> mark it as fix released or invalid


** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1222852

Title:
  whitebox tests should be converted to nova unit tests

Status in OpenStack Compute (Nova):
  Fix Released
Status in Tempest:
  Fix Released

Bug description:
  The whitebox tests in tempest are actually basically state change
  tests for nova, which could be better done as nova unit tests. This is
  especially true because changing the database in tempest is somewhat a
  verboten thing. It would also solve the nightly fails on tempest-all
  by removing them.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1222852/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291605] Re: unit test test_create_instance_with_networks_disabled race fail

2014-09-09 Thread Joe Gordon
Looks like there are no hits for the gate queue anymore. And the current
hits appear to be for jobs that have a lot of failures. Looks like this
isn't valid anymore.

** Changed in: nova
   Status: Triaged => Won't Fix

** Changed in: nova
   Status: Won't Fix => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291605

Title:
  unit test test_create_instance_with_networks_disabled race fail

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  http://logs.openstack.org/22/43822/32/check/gate-nova-
  python27/e33dc5b/console.html

  message:"FAIL:
  
nova.tests.api.openstack.compute.plugins.v3.test_servers.ServersControllerCreateTest.test_create_instance_with_networks_disabled"
  AND filename:"console.html" AND (build_name:"gate-nova-python27" OR
  build_name:"gate-nova-python26")

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogbm92YS50ZXN0cy5hcGkub3BlbnN0YWNrLmNvbXB1dGUucGx1Z2lucy52My50ZXN0X3NlcnZlcnMuU2VydmVyc0NvbnRyb2xsZXJDcmVhdGVUZXN0LnRlc3RfY3JlYXRlX2luc3RhbmNlX3dpdGhfbmV0d29ya3NfZGlzYWJsZWRcIiBBTkQgZmlsZW5hbWU6XCJjb25zb2xlLmh0bWxcIiBBTkQgKGJ1aWxkX25hbWU6XCJnYXRlLW5vdmEtcHl0aG9uMjdcIiBPUiBidWlsZF9uYW1lOlwiZ2F0ZS1ub3ZhLXB5dGhvbjI2XCIpIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzk0NjU3NTUwMjU1fQ==

  12 hits in 7 days

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291605/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367442] [NEW] project > images fixed filter is missing icons

2014-09-09 Thread Cindy Lu
Public bug reported:

Project | Shared with Me | Public ==> should have icons beside each
filter choice, as they did before the Bootstrap 3 update.

https://github.com/openstack/horizon/blob/master/horizon/templates/horizon/common/_data_table_table_actions.html#L7

code:
{% if button.icon %}

With Bootstrap 3 update, we should be using... class="glyphicon
glyphicon-star" for icons instead of 

** Affects: horizon
 Importance: Undecided
 Assignee: Cindy Lu (clu-m)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Cindy Lu (clu-m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367442

Title:
  project > images fixed filter is missing icons

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Project | Shared with Me | Public ==> should have icons beside each
  filter choice, as they did before the Bootstrap 3 update.

  
https://github.com/openstack/horizon/blob/master/horizon/templates/horizon/common/_data_table_table_actions.html#L7

  code:
  {% if button.icon %}

  With Bootstrap 3 update, we should be using... class="glyphicon
  glyphicon-star" for icons instead of 

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353962] Re: Test job fails with FixedIpLimitExceeded with nova network

2014-09-09 Thread Joe Gordon
Is it possible tempest is leaking servers?

** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1353962

Title:
  Test job fails with FixedIpLimitExceeded with nova network

Status in OpenStack Compute (Nova):
  Incomplete
Status in Tempest:
  New

Bug description:
  VM creation failed due to a `shortage` of fixed IPs.

  The fixed range is /24; tempest normally does not keep up more than
~8 VMs.

  message: "FixedIpLimitExceeded" AND filename:"logs/screen-n-net.txt"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkZpeGVkSXBMaW1pdEV4Y2VlZGVkXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tbi1uZXQudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MDc0MTA0MzE3MTgsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  http://logs.openstack.org/23/112523/1/check/check-tempest-dsvm-
  postgres-
  full/acac6d9/logs/screen-n-cpu.txt.gz#_2014-08-07_09_42_18_481

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1353962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290642] Re: negative "rescue" tempest tests fail, cannot pause/rescue while instance is in vm_state building

2014-09-09 Thread Joe Gordon
Sounds like a tempest bug, not a nova bug.

** Also affects: tempest
   Importance: Undecided
   Status: New

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290642

Title:
  negative "rescue" tempest tests fail, cannot pause/rescue while
  instance is in vm_state building

Status in Tempest:
  New

Bug description:
  
  A tempest run failed several tests with errors like "Cannot 'pause' while 
instance is in vm_state building".  The other things tempest was trying to do 
were 'rescue' and 'attach_volume', and they all failed with the same error.

  Also, the error reported was "Conflict: An object with that identifier
  already exists" -- but the conflict is not really with an existing object
  of that identifier; it is a conflict with the instance's state.

  Here's the tests that failed, in
  tempest.api.compute.servers.test_server_rescue_negative :

  .ServerRescueNegativeTestXML.test_rescue_paused_instance[gate,negative]
  .ServerRescueNegativeTestXML.test_rescued_vm_attach_volume[gate,negative]
  .ServerRescueNegativeTestXML.test_rescued_vm_detach_volume[gate,negative]

  n-cpu:

  2014-03-10 22:43:00.681 21160 INFO nova.virt.libvirt.driver [-] [instance: 
f2b64699-afc5-43f1-b3ef-868a92c08fd1] Instance spawned successfully.
  2014-03-10 22:43:00.684 DEBUG nova.compute.manager 
[req-f546c7d0-c320-4519-b37a-68dcd3a87783 ServerRescueNegativeTestXML-500126247 
ServerRescueNegativeTestXML-1828544031] [instance: 
f2b64699-afc5-43f1-b3ef-868a92c08fd1] Checking state _get_power_state 
/opt/stack/new/nova/nova/compute/manager.py:986

  n-api:

  2014-03-10 22:43:01.193 INFO nova.api.openstack.wsgi [req-9f3d2f24
  -44af-4e0c-8e44-3289944c19d3 ServerRescueNegativeTestXML-500126247
  ServerRescueNegativeTestXML-1828544031] HTTP exception thrown: Cannot
  'pause' while instance is in vm_state building

  Looks like the request doesn't get from api to cpu, since api thinks
  it's not ready.

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1290642/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367432] [NEW] developer docs don't include metadata definitions concepts

2014-09-09 Thread Travis Tripp
Public bug reported:

The below site has the API docs, but there isn't any other mention of
the metadata definitions concepts.

http://docs.openstack.org/developer/glance/

** Affects: glance
 Importance: Undecided
 Assignee: Travis Tripp (travis-tripp)
 Status: New


** Tags: documentation

** Changed in: glance
 Assignee: (unassigned) => Travis Tripp (travis-tripp)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367432

Title:
  developer docs don't include metadata definitions concepts

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The below site has the API docs, but there isn't any other mention of
  the metadata definitions concepts.

  http://docs.openstack.org/developer/glance/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352092] Re: tempest.scenario.test_large_ops.TestLargeOpsScenario.test_large_ops_scenario_3

2014-09-09 Thread David Kranz
This happened once in the last week. So it is real but not common. I am
assuming this is a nova issue and not tempest. Please reopen if there is
evidence to the contrary.

** Changed in: tempest
   Status: New => Invalid

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1352092

Title:
  tempest.scenario.test_large_ops.TestLargeOpsScenario.test_large_ops_scenario_3

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  2014-08-04 01:41:35.047 | Traceback (most recent call last):
  2014-08-04 01:41:35.047 |   File "tempest/scenario/manager.py", line 175, in 
delete_wrapper
  2014-08-04 01:41:35.047 | thing.delete()
  2014-08-04 01:41:35.048 |   File 
"/opt/stack/new/python-novaclient/novaclient/v1_1/security_groups.py", line 31, 
in delete
  2014-08-04 01:41:35.048 | self.manager.delete(self)
  2014-08-04 01:41:35.048 |   File 
"/opt/stack/new/python-novaclient/novaclient/v1_1/security_groups.py", line 71, 
in delete
  2014-08-04 01:41:35.049 | self._delete('/os-security-groups/%s' % 
base.getid(group))
  2014-08-04 01:41:35.049 |   File 
"/opt/stack/new/python-novaclient/novaclient/base.py", line 109, in _delete
  2014-08-04 01:41:35.049 | _resp, _body = self.api.client.delete(url)
  2014-08-04 01:41:35.050 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 538, in delete
  2014-08-04 01:41:35.050 | return self._cs_request(url, 'DELETE', **kwargs)
  2014-08-04 01:41:35.050 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 507, in 
_cs_request
  2014-08-04 01:41:35.050 | resp, body = self._time_request(url, method, 
**kwargs)
  2014-08-04 01:42:12.628 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 481, in 
_time_request
  2014-08-04 01:42:12.629 | resp, body = self.request(url, method, **kwargs)
  2014-08-04 01:42:12.629 |   File 
"/opt/stack/new/python-novaclient/novaclient/client.py", line 475, in request
  2014-08-04 01:42:12.629 | raise exceptions.from_response(resp, body, url, 
method)
  2014-08-04 01:42:12.630 | BadRequest: Security group is still in use (HTTP 
400) (Request-ID: req-cb8e9344-57e7-4ad8-962c-c527934eae59)
  2014-08-04 01:42:12.630 | 
==
  2014-08-04 01:42:12.630 | FAIL: 
tempest.scenario.test_large_ops.TestLargeOpsScenario.test_large_ops_scenario_3[compute,image]
  2014-08-04 01:42:12.631 | tags: worker-0
  2014-08-04 01:42:12.631 | 
--
  2014-08-04 01:42:12.631 | Empty attachments:
  2014-08-04 01:42:12.631 |   stderr
  2014-08-04 01:42:12.632 |   stdout

  detail: http://logs.openstack.org/38/38/3/check/gate-tempest-dsvm-
  large-ops/c0005d5/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1352092/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363288] Re: Typo in keystone/common/controller.py

2014-09-09 Thread Dolph Mathews
A fix is gating that's not referencing a bug:
https://review.openstack.org/#/c/117902/

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1363288

Title:
  Typo in keystone/common/controller.py

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  
https://github.com/openstack/keystone/blob/67b474f4ba3428eca97a4a6faaaba5951253f236/keystone/common/controller.py#L65
  I found a typo in the controller.py file: "sane" was written instead of "same".

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1363288/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363289] Re: Typos in base64utils.py file

2014-09-09 Thread Dolph Mathews
Fixed in
https://review.openstack.org/#/c/118913/1/keystone/common/base64utils.py

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1363289

Title:
  Typos in base64utils.py file

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  In 
https://github.com/openstack/keystone/blob/master/keystone/common/base64utils.py
  Typos:
  Line No : 143 "enconding" in place of "encoding"
  
https://github.com/openstack/keystone/blob/67b474f4ba3428eca97a4a6faaaba5951253f236/keystone/common/base64utils.py#L143
  Line No : 296 and 300 "multple" in place of "multiple"
  
https://github.com/openstack/keystone/blob/67b474f4ba3428eca97a4a6faaaba5951253f236/keystone/common/base64utils.py#L296
  Line No :313, 350 and 372 "whitepace" in place of "whitespace"
  
https://github.com/openstack/keystone/blob/67b474f4ba3428eca97a4a6faaaba5951253f236/keystone/common/base64utils.py#L313
  
https://github.com/openstack/keystone/blob/67b474f4ba3428eca97a4a6faaaba5951253f236/keystone/common/base64utils.py#L350
  
https://github.com/openstack/keystone/blob/67b474f4ba3428eca97a4a6faaaba5951253f236/keystone/common/base64utils.py#L372

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1363289/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1358814] Re: test_s3_ec2_images fails with 500 error "Unknown error occurred"

2014-09-09 Thread David Kranz
Looks like some kind of nova issue.

** Changed in: tempest
   Status: New => Invalid

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1358814

Title:
  test_s3_ec2_images fails with 500 error "Unknown error occurred"

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  While testing a CI job I'm setting up to validate a Cinder driver, I
  encountered the following issue while running tempest-dsvm-full:

  Full log at: http://r.ci.devsca.com:8080/job/periodic-scality-tempest-
  dsvm-full/12/console

  The relevant screen logs I could find (which contain either errors or 
tracebacks) are:
   - error.log (contains one single line from rabbitmq not being able to set 
the password)
   - screen-g-api.log
   - screen-g-reg.log
   - screen-tr-api.log

  All the screen logs are attached as a gzip archive file.

  Traceback of the internal server error:
  
tempest.thirdparty.boto.test_s3_ec2_images.S3ImagesTest.test_register_get_deregister_aki_image
  16:03:09 
--
  16:03:09 
  16:03:09 Captured traceback:
  16:03:09 ~~~
  16:03:09 Traceback (most recent call last):
  16:03:09   File "tempest/thirdparty/boto/test_s3_ec2_images.py", line 90, 
in test_register_get_deregister_aki_image
  16:03:09 self.assertImageStateWait(retrieved_image, "available")
  16:03:09   File "tempest/thirdparty/boto/test.py", line 354, in 
assertImageStateWait
  16:03:09 state = self.waitImageState(lfunction, wait_for)
  16:03:09   File "tempest/thirdparty/boto/test.py", line 339, in 
waitImageState
  16:03:09 self.valid_image_state)
  16:03:09   File "tempest/thirdparty/boto/test.py", line 333, in 
state_wait_gone
  16:03:09 state = wait.state_wait(lfunction, final_set, valid_set)
  16:03:09   File "tempest/thirdparty/boto/utils/wait.py", line 54, in 
state_wait
  16:03:09 status = lfunction()
  16:03:09   File "tempest/thirdparty/boto/test.py", line 316, in _status
  16:03:09 obj.update(validate=True)
  16:03:09   File 
"/usr/local/lib/python2.7/dist-packages/boto/ec2/image.py", line 160, in update
  16:03:09 rs = self.connection.get_all_images([self.id], 
dry_run=dry_run)
  16:03:09   File 
"/usr/local/lib/python2.7/dist-packages/boto/ec2/connection.py", line 190, in 
get_all_images
  16:03:09 [('item', Image)], verb='POST')
  16:03:09   File 
"/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1150, in 
get_list
  16:03:09 response = self.make_request(action, params, path, verb)
  16:03:09   File 
"/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1096, in 
make_request
  16:03:09 return self._mexe(http_request)
  16:03:09   File 
"/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1009, in _mexe
  16:03:09 raise BotoServerError(response.status, response.reason, body)
  16:03:09 BotoServerError: BotoServerError: 500 Internal Server Error
  16:03:09 
  16:03:09 
HTTPInternalServerError: Unknown error occurred. (req-f2757f18-e039-49b1-b537-e48d0281abf0)
  16:03:09 
  16:03:09 
  16:03:09 Captured pythonlogging:
  16:03:09 ~~~
  16:03:09 2014-08-19 16:02:33,467 30126 DEBUG
[keystoneclient.auth.identity.v2] Making authentication request to 
http://127.0.0.1:5000/v2.0/tokens
  16:03:09 2014-08-19 16:02:36,730 30126 INFO 
[tempest.thirdparty.boto.utils.wait] State transition "pending" ==> "failed" 1 
second
  16:03:09  


  
  Glance API Screen Log:
  2014-08-19 16:02:50.519 26241 DEBUG keystonemiddleware.auth_token [-] Storing 
token in cache store 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py:1425
  2014-08-19 16:02:50.520 26241 DEBUG keystonemiddleware.auth_token [-] 
Received request from user: f28e3251f72347df9791ecd861c5caf4 with project_id : 
526acfaadbc042f8ac7c37d9ef7cffde and roles: _member_,Member  
_build_user_headers 
/usr/local/lib/python2.7/dist-packages/keystonemiddleware/auth_token.py:738
  2014-08-19 16:02:50.521 26241 DEBUG routes.middleware [-] Matched HEAD 
/images/db22d1d9-420b-41d2-8603-86c6fb9b5962 __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:100
  2014-08-19 16:02:50.522 26241 DEBUG routes.middleware [-] Route path: 
'/images/{id}', defaults: {'action': u'meta', 'controller': 
} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:102
  2014-08-19 16:02:50.522 26241 DEBUG routes.middleware [-] Match dict: 
{'action': u'meta', 'controller': , 'id': u'db22d1d9-420b-41d2-8603-86c6fb9b5962'} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:103
  2014-08-19 16:02:50.522 26241 DEBUG glance.common

[Yahoo-eng-team] [Bug 1358857] Re: test_load_balancer_basic mismatch error

2014-09-09 Thread David Kranz
This must be a race of some sort in tempest or neutron but I'm not sure
which.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1358857

Title:
  test_load_balancer_basic mismatch error

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  New

Bug description:
  Gate failed check-tempest-dsvm-neutron-full on this (unrelated) patch change: 
  https://review.openstack.org/#/c/114693/

  http://logs.openstack.org/93/114693/1/check/check-tempest-dsvm-
  neutron-full/2755713/console.html

  
  2014-08-19 01:11:40.597 | ==
  2014-08-19 01:11:40.597 | Failed 1 tests - output below:
  2014-08-19 01:11:40.597 | ==
  2014-08-19 01:11:40.597 | 
  2014-08-19 01:11:40.597 | 
tempest.scenario.test_load_balancer_basic.TestLoadBalancerBasic.test_load_balancer_basic[compute,gate,network,smoke]
  2014-08-19 01:11:40.597 | 

  2014-08-19 01:11:40.597 | 
  2014-08-19 01:11:40.597 | Captured traceback:
  2014-08-19 01:11:40.597 | ~~~
  2014-08-19 01:11:40.598 | Traceback (most recent call last):
  2014-08-19 01:11:40.598 |   File "tempest/test.py", line 128, in wrapper
  2014-08-19 01:11:40.598 | return f(self, *func_args, **func_kwargs)
  2014-08-19 01:11:40.598 |   File 
"tempest/scenario/test_load_balancer_basic.py", line 297, in 
test_load_balancer_basic
  2014-08-19 01:11:40.598 | self._check_load_balancing()
  2014-08-19 01:11:40.598 |   File 
"tempest/scenario/test_load_balancer_basic.py", line 277, in 
_check_load_balancing
  2014-08-19 01:11:40.598 | self._send_requests(self.vip_ip, 
set(["server1", "server2"]))
  2014-08-19 01:11:40.598 |   File 
"tempest/scenario/test_load_balancer_basic.py", line 289, in _send_requests
  2014-08-19 01:11:40.598 | set(resp))
  2014-08-19 01:11:40.598 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 321, in 
assertEqual
  2014-08-19 01:11:40.598 | self.assertThat(observed, matcher, message)
  2014-08-19 01:11:40.599 |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 406, in 
assertThat
  2014-08-19 01:11:40.599 | raise mismatch_error
  2014-08-19 01:11:40.599 | MismatchError: set(['server1', 'server2']) != 
set(['server1'])

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1358857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359805] Re: 'Requested operation is not valid: domain is not running' from check-tempest-dsvm-neutron-full

2014-09-09 Thread David Kranz
*** This bug is a duplicate of bug 1260537 ***
https://bugs.launchpad.net/bugs/1260537

None is available I'm afraid. This is not a bug in tempest and this
ticket https://bugs.launchpad.net/tempest/+bug/1260537 is used to track
such things for whatever good it does.

** This bug has been marked a duplicate of bug 1260537
   Generic catchall bug for non-triaged bugs where a server doesn't reach its 
required state

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1359805

Title:
  'Requested operation is not valid: domain is not running' from check-
  tempest-dsvm-neutron-full

Status in OpenStack Neutron (virtual network service):
  Incomplete
Status in Tempest:
  New

Bug description:
  I received the following error from the check-tempest-dsvm-neutron-
  full test suite after submitting a nova patch:

  2014-08-21 14:11:25.059 | Captured traceback:
  2014-08-21 14:11:25.059 | ~~~
  2014-08-21 14:11:25.059 | Traceback (most recent call last):
  2014-08-21 14:11:25.059 |   File 
"tempest/api/compute/servers/test_server_actions.py", line 407, in 
test_suspend_resume_server
  2014-08-21 14:11:25.059 | 
self.client.wait_for_server_status(self.server_id, 'SUSPENDED')
  2014-08-21 14:11:25.059 |   File 
"tempest/services/compute/xml/servers_client.py", line 390, in 
wait_for_server_status
  2014-08-21 14:11:25.059 | raise_on_error=raise_on_error)
  2014-08-21 14:11:25.059 |   File "tempest/common/waiters.py", line 77, in 
wait_for_server_status
  2014-08-21 14:11:25.059 | server_id=server_id)
  2014-08-21 14:11:25.059 | BuildErrorException: Server 
a29ec7be-be83-4247-b7db-49bd4727d206 failed to build and is in ERROR status
  2014-08-21 14:11:25.059 | Details: {'message': 'Requested operation is 
not valid: domain is not running', 'code': '500', 'details': 'None', 'created': 
'2014-08-21T13:49:49Z'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1359805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1359995] Re: Tempest failed to delete user

2014-09-09 Thread David Kranz
Tempest does check for token expiry and no test should fail due to an
expired token. So this must be a keystone issue. I just looked at
another bug that got an unauthorized response for one of the keystone tests
with no explanation; I have also added keystone to that ticket:
https://bugs.launchpad.net/keystone/+bug/1360504

** Changed in: tempest
   Status: New => Confirmed

** Changed in: tempest
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1359995

Title:
  Tempest failed to delete user

Status in devstack - openstack dev environments:
  Fix Released
Status in OpenStack Identity (Keystone):
  Incomplete
Status in Tempest:
  Invalid

Bug description:
  
  check-tempest-dsvm-full failed on a keystone change. Here's the main log: 
http://logs.openstack.org/73/111573/4/check/check-tempest-dsvm-full/c5ce3bd/console.html

  The traceback shows:

  File "tempest/api/volume/test_volumes_list.py", line 80, in tearDownClass
  File "tempest/services/identity/json/identity_client.py", line 189, in 
delete_user
  Unauthorized: Unauthorized
  Details: {"error": {"message": "The request you have made requires 
authentication. (Disable debug mode to suppress these details.)", "code": 401, 
"title": "Unauthorized"}}

  So it's trying to delete the user and it gets unauthorized. Maybe the
  token was expired or marked invalid for some reason.

  There's something wrong here, but the keystone logs are useless for
  debugging now that it's running in Apache httpd. The logs don't have
  the request or result line, so you can't find where the request was
  being made.

  Also, Tempest should be able to handle the token being invalidated. It
  should just get a new token and try with that.
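
  For illustration, a rough sketch (not tempest's actual rest_client) of the
  "get a new token and retry" behaviour suggested above; the auth_provider
  and http objects and their methods are stand-ins, not real tempest APIs:

    # Illustrative only: re-authenticate on 401 and retry the call once.
    class RetryingClient(object):
        def __init__(self, auth_provider, http):
            self.auth_provider = auth_provider  # knows how to fetch tokens
            self.http = http                    # performs the raw request

        def request(self, method, url, **kwargs):
            token = self.auth_provider.get_token()
            resp = self.http.request(method, url, token=token, **kwargs)
            if resp.status == 401:
                # Token expired or was invalidated server-side: drop it,
                # fetch a fresh one and retry exactly once.
                self.auth_provider.invalidate_token()
                token = self.auth_provider.get_token()
                resp = self.http.request(method, url, token=token, **kwargs)
            return resp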

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1359995/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338844] Re: "FixedIpLimitExceeded: Maximum number of fixed ips exceeded" in tempest nova-network runs since 7/4

2014-09-09 Thread Matthew Treinish
** Changed in: tempest
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338844

Title:
  "FixedIpLimitExceeded: Maximum number of fixed ips exceeded" in
  tempest nova-network runs since 7/4

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Fix Released

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiQnVpbGRBYm9ydEV4Y2VwdGlvbjogQnVpbGQgb2YgaW5zdGFuY2VcIiBBTkQgbWVzc2FnZTpcImFib3J0ZWQ6IEZhaWxlZCB0byBhbGxvY2F0ZSB0aGUgbmV0d29yayhzKSB3aXRoIGVycm9yIE1heGltdW0gbnVtYmVyIG9mIGZpeGVkIGlwcyBleGNlZWRlZCwgbm90IHJlc2NoZWR1bGluZy5cIiBBTkQgdGFnczpcInNjcmVlbi1uLWNwdS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwNDc3OTE1MzY1MiwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  Saw it here:

  http://logs.openstack.org/63/98563/5/check/check-tempest-dsvm-
  postgres-full/1472e7b/logs/screen-n-cpu.txt.gz?level=TRACE

  Looks like it's only in jobs using nova-network.

  Started on 7/4, 70 failures in 7 days, check and gate, multiple
  changes.

  Maybe related to https://review.openstack.org/104581.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1338844/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294511] Re: test_aggregate_add_host_create_server_with_az fails with remote compute connection scenario

2014-09-09 Thread Matthew Treinish
** Changed in: tempest
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294511

Title:
  test_aggregate_add_host_create_server_with_az fails with remote
  compute connection scenario

Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Fix Released

Bug description:
  Problem:
  In a deployment that is not all-in-one, the controller node connects to a 
remote nova-compute node. In that scenario the tempest test case 
test_aggregate_add_host_create_server_with_az fails: when a server is created 
with an availability zone, it ends up in error status as below.

  {"message": "NV-67B7376 No valid host was found. ", "code": 500,
  "details": "  File \"/usr/lib/python2.6/site-
  packages/nova/scheduler/filter_scheduler.py\", line 108, in
  schedule_run_instance

  Basic investigation:

  The test logic adds the controller node's own hostname to the aggregate by
  default, assuming nova-compute runs there. In the scenario above the compute
  node is a remote node rather than the controller, so scheduling into the
  aggregate's availability zone fails with "No valid host was found".
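
  For illustration, a rough sketch of driving the same scenario correctly
  with python-novaclient: pick a host that actually runs nova-compute (which
  may be a remote node) instead of defaulting to the controller. The
  credentials, image and flavor ids below are placeholders:

    from novaclient.v1_1 import client as nova_client

    nova = nova_client.Client('admin', 'password', 'admin',
                              'http://127.0.0.1:5000/v2.0')

    # Hosts that really run nova-compute, possibly remote nodes.
    compute_hosts = [s.host
                     for s in nova.services.list(binary='nova-compute')]

    agg = nova.aggregates.create('test-aggregate', 'test-az')
    nova.aggregates.add_host(agg, compute_hosts[0])

    # A server requested in that AZ can now land on the added host.
    nova.servers.create('az-test-server', '<image-id>', '<flavor-id>',
                        availability_zone='test-az')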

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294511/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254890] Re: "Timed out waiting for thing ... to become ACTIVE" causes tempest-dsvm-* failures

2014-09-09 Thread Matthew Treinish
** Changed in: tempest
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1254890

Title:
  "Timed out waiting for thing ... to become ACTIVE" causes tempest-
  dsvm-* failures

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Committed
Status in Tempest:
  Fix Released

Bug description:
  Separate out bug from:
  https://bugs.launchpad.net/neutron/+bug/1250168/comments/23

  Logstash query from elastic-recheck:
  message:"Details: Timed out waiting for thing" AND message:"to become ACTIVE"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRGV0YWlsczogVGltZWQgb3V0IHdhaXRpbmcgZm9yIHRoaW5nXCIgQU5EIG1lc3NhZ2U6XCJ0byBiZWNvbWUgQUNUSVZFXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTA4NDc1MzM2MDR9

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1254890/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360504] Re: tempest.api.identity.admin.v3.test_credentials.CredentialsTestJSON create credential unauthorized

2014-09-09 Thread David Kranz
I don't see any hits for this in logstash. There is nothing unusual
about this test and it is surrounded by similar tests that pass. So
there must be some issue in keystone that is causing the admin
credentials to be rejected here.

** Changed in: tempest
   Status: New => Invalid

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1360504

Title:
  tempest.api.identity.admin.v3.test_credentials.CredentialsTestJSON
  create credential unauthorized

Status in OpenStack Identity (Keystone):
  New
Status in Tempest:
  Invalid

Bug description:
  The bug appeared in a gate-tempest-dsvm-neutron-full run:
  https://review.openstack.org/#/c/47/5

  Full console.log here: http://logs.openstack.org/47/47/5/gate
  /gate-tempest-dsvm-neutron-full/f21c917/console.html

  Stacktrace:
  2014-08-22 10:49:35.168 | 
tempest.api.identity.admin.v3.test_credentials.CredentialsTestJSON.test_credentials_create_get_update_delete[gate,smoke]
  2014-08-22 10:49:35.168 | 

  2014-08-22 10:49:35.168 | 
  2014-08-22 10:49:35.168 | Captured traceback:
  2014-08-22 10:49:35.168 | ~~~
  2014-08-22 10:49:35.168 | Traceback (most recent call last):
  2014-08-22 10:49:35.168 |   File 
"tempest/api/identity/admin/v3/test_credentials.py", line 62, in 
test_credentials_create_get_update_delete
  2014-08-22 10:49:35.168 | self.projects[0])
  2014-08-22 10:49:35.168 |   File 
"tempest/services/identity/v3/json/credentials_client.py", line 43, in 
create_credential
  2014-08-22 10:49:35.168 | resp, body = self.post('credentials', 
post_body)
  2014-08-22 10:49:35.168 |   File "tempest/common/rest_client.py", line 
219, in post
  2014-08-22 10:49:35.169 | return self.request('POST', url, 
extra_headers, headers, body)
  2014-08-22 10:49:35.169 |   File "tempest/common/rest_client.py", line 
431, in request
  2014-08-22 10:49:35.169 | resp, resp_body)
  2014-08-22 10:49:35.169 |   File "tempest/common/rest_client.py", line 
472, in _error_checker
  2014-08-22 10:49:35.169 | raise exceptions.Unauthorized(resp_body)
  2014-08-22 10:49:35.169 | Unauthorized: Unauthorized
  2014-08-22 10:49:35.169 | Details: {"error": {"message": "The request you 
have made requires authentication. (Disable debug mode to suppress these 
details.)", "code": 401, "title": "Unauthorized"}}
  2014-08-22 10:49:35.169 | 
  2014-08-22 10:49:35.169 | 
  2014-08-22 10:49:35.169 | Captured pythonlogging:
  2014-08-22 10:49:35.170 | ~~~
  2014-08-22 10:49:35.170 | 2014-08-22 10:31:28,001 5831 INFO 
[tempest.common.rest_client] Request 
(CredentialsTestJSON:test_credentials_create_get_update_delete): 401 POST 
http://127.0.0.1:35357/v3/credentials 0.065s

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1360504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1249319] Re: evacuate on ceph backed volume fails

2014-09-09 Thread Miroslav Anashkin
** Also affects: mos
   Importance: Undecided
   Status: New

** Tags added: customer-found

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249319

Title:
  evacuate on ceph backed volume fails

Status in Mirantis OpenStack:
  New
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  When using nova evacuate to move an instance from one compute host to
  another, the command silently fails. The issue seems to be that the
  rebuild process builds an incorrect libvirt.xml file that no longer
  correctly references the ceph volume.

  Specifically under the  section I see:

  

  where in the original libvirt.xml the file was:

  

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1249319/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361781] Re: bogus "Cannot 'rescue' while instance is in task_state powering-off"

2014-09-09 Thread David Kranz
This looks real though there is only one hit in the past 8 days and the
log is not available. The test immediately preceding this failure has an
addCleanup that unrescues:

def _unrescue(self, server_id):
resp, body = self.servers_client.unrescue_server(server_id)
self.assertEqual(202, resp.status)
self.servers_client.wait_for_server_status(server_id, 'ACTIVE')


The only possibility I can see is that somehow even after nova reports ACTIVE, 
the rescue code thinks the server is still in the powering-off state. I am 
going to call this a nova issue unless someone claims the above code is not 
sufficient to allow a follow-on call to rescue.

** Changed in: tempest
   Status: New => Invalid

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1361781

Title:
  bogus "Cannot 'rescue' while instance is in task_state powering-off"

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
  
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_associate_dissociate_floating_ip
  appears flaky.  For my change that fixes some documentation, sometimes
  this test succeeds and sometimes it fails.  For an example of a
  failure, see http://logs.openstack.org/85/109385/5/check/check-
  tempest-dsvm-full/ab9c111/

  Here is the traceback from the console.html in that case:

  2014-08-26 07:29:18.804 | ==
  2014-08-26 07:29:18.804 | Failed 1 tests - output below:
  2014-08-26 07:29:18.805 | ==
  2014-08-26 07:29:18.805 | 
  2014-08-26 07:29:18.805 | 
tempest.api.compute.servers.test_server_rescue.ServerRescueTestJSON.test_rescued_vm_associate_dissociate_floating_ip[gate]
  2014-08-26 07:29:18.805 | 
--
  2014-08-26 07:29:18.805 | 
  2014-08-26 07:29:18.805 | Captured traceback:
  2014-08-26 07:29:18.805 | ~~~
  2014-08-26 07:29:18.806 | Traceback (most recent call last):
  2014-08-26 07:29:18.806 |   File 
"tempest/api/compute/servers/test_server_rescue.py", line 95, in 
test_rescued_vm_associate_dissociate_floating_ip
  2014-08-26 07:29:18.806 | self.server_id, adminPass=self.password)
  2014-08-26 07:29:18.806 |   File 
"tempest/services/compute/json/servers_client.py", line 463, in rescue_server
  2014-08-26 07:29:18.806 | schema.rescue_server, **kwargs)
  2014-08-26 07:29:18.806 |   File 
"tempest/services/compute/json/servers_client.py", line 218, in action
  2014-08-26 07:29:18.806 | post_body)
  2014-08-26 07:29:18.807 |   File "tempest/common/rest_client.py", line 
219, in post
  2014-08-26 07:29:18.807 | return self.request('POST', url, 
extra_headers, headers, body)
  2014-08-26 07:29:18.807 |   File "tempest/common/rest_client.py", line 
431, in request
  2014-08-26 07:29:18.807 | resp, resp_body)
  2014-08-26 07:29:18.807 |   File "tempest/common/rest_client.py", line 
485, in _error_checker
  2014-08-26 07:29:18.807 | raise exceptions.Conflict(resp_body)
  2014-08-26 07:29:18.807 | Conflict: An object with that identifier 
already exists
  2014-08-26 07:29:18.808 | Details: {u'message': u"Cannot 'rescue' while 
instance is in task_state powering-off", u'code': 409}
  2014-08-26 07:29:18.808 | 
  2014-08-26 07:29:18.808 | 
  2014-08-26 07:29:18.808 | Captured pythonlogging:
  2014-08-26 07:29:18.808 | ~~~
  2014-08-26 07:29:18.808 | 2014-08-26 07:05:12,251 25737 INFO 
[tempest.common.rest_client] Request 
(ServerRescueTestJSON:test_rescued_vm_associate_dissociate_floating_ip): 409 
POST 
http://127.0.0.1:8774/v2/690b69920c1b4a4c8d2b376ba4cb6f80/servers/9a840d84-a381-42e5-81ef-8e7cd95c086e/action
 0.211s

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1361781/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367394] [NEW] [data processing] Allow username password to be optional for data sources/job binaries

2014-09-09 Thread Chad Roberts
Public bug reported:

*needed for juno*

The data processing system is changing from stored credentials for
containers to a trust-based authentication system. This requires the UI
to make username/password optional fields for data sources and job
binaries.

The corresponding Sahara project blueprint is:
https://blueprints.launchpad.net/sahara/+spec/edp-swift-trust-
authentication
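
For illustration, a minimal Django form sketch of what "optional" means
here; the form and field names are made up and are not the actual data
processing panel code:

    from django import forms

    class DataSourceForm(forms.Form):
        name = forms.CharField(max_length=80)
        url = forms.CharField(max_length=255)
        # With trust-based authentication these become optional.
        data_source_username = forms.CharField(max_length=80,
                                               required=False)
        data_source_password = forms.CharField(widget=forms.PasswordInput(),
                                               required=False)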

** Affects: horizon
 Importance: Undecided
 Assignee: Chad Roberts (croberts)
 Status: In Progress


** Tags: sahara

** Changed in: horizon
 Assignee: (unassigned) => Chad Roberts (croberts)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367394

Title:
  [data processing] Allow username password to be optional for data
  sources/job binaries

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  *needed for juno*

  The data processing system is changing from stored credentials for
  containers to a trust-based authentication system. This requires the
  UI to make username/password optional fields for data sources and job
  binaries.

  The corresponding Sahara project blueprint is:
  https://blueprints.launchpad.net/sahara/+spec/edp-swift-trust-
  authentication

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367394/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367391] [NEW] ML2 DVR port binding implementation unnecessarily duplicates schema and logic

2014-09-09 Thread Robert Kukura
Public bug reported:

Support for distributed port bindings was added to ML2 in order to
enable the same DVR port to be bound simultaneously on multiple hosts.
This was implemented by:

* Adding a new ml2_dvr_port_bindings table similar to the ml2_port_bindings 
table, but with the host column as part of the primary key.
* Adding a new DvrPortContext class that overrides several functions in 
PortContext.
* Adding DVR-specific internal functions to Ml2Plugin, 
_process_dvr_port_binding and _commit_dvr_port_binding, that are modified 
copies of existing functions.
* In about 8 places, making code conditional on "port['device_owner'] == 
const.DEVICE_OWNER_DVR_INTERFACE" to handle DVR ports using the above models, 
classes and functions instead of the normal ones.

This duplication of schema and code adds significant technical debt to
the ML2 plugin implementation, requiring developers and reviewers to
evaluate for all changes whether they need to apply to both the normal
and DVR-specific copies. In addition, copied code is certain to diverge
over time, making the effort to keep the copies as synchronized as
possible become more and more difficult.

This unnecessary duplication of schema and code should be significantly
reduced or completely eliminated by treating a normal non-distributed
port as a special case of a distributed port that happens to only bind
on a single host.

The schema would be unified by replacing the existing ml2_port_bindings
and ml2_dvr_port_bindings tables with two new non-overlapping tables.
One would contain the port state that is the same for all hosts on which
the port binds, including the values of the binding:host,
binding:vnic_type, and binding:profile attributes. The other would
contain the port state that differs among host-specific bindings, such
as the binding:vif_type and binding:vif_details attribute values, and
the bound driver and segment (until these two move to a separate table
for hierarchical port binding).

Also, the basic idea of distributed port bindings is not specific to
DVR, and could be used for DHCP and other services, so the schema and
code could be made more generic as the distributed and normal schema and
code are unified.
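
For illustration, a rough SQLAlchemy sketch of the proposed split; the
table, column and class names are invented here and are not the actual ML2
models:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class CommonPortBinding(Base):
        # State that is the same for every host the port binds on,
        # including the requested binding:host, vnic_type and profile.
        __tablename__ = 'ml2_port_bindings_common'
        port_id = sa.Column(sa.String(36), primary_key=True)
        host = sa.Column(sa.String(255), nullable=False, default='')
        vnic_type = sa.Column(sa.String(64), nullable=False)
        profile = sa.Column(sa.String(4095), nullable=False, default='')

    class HostPortBinding(Base):
        # State that differs between host-specific bindings; one row per
        # (port, host), so a distributed port simply has several rows.
        __tablename__ = 'ml2_port_bindings_by_host'
        port_id = sa.Column(sa.String(36), primary_key=True)
        host = sa.Column(sa.String(255), primary_key=True)
        vif_type = sa.Column(sa.String(64), nullable=False)
        vif_details = sa.Column(sa.String(4095), nullable=False, default='')
        driver = sa.Column(sa.String(64))
        segment_id = sa.Column(sa.String(36))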

** Affects: neutron
 Importance: High
 Assignee: Robert Kukura (rkukura)
 Status: New


** Tags: ml2

** Description changed:

  Support for distributed port bindings was added to ML2 in order to
  enable the same DVR port to be bound simultaneously on multiple hosts.
  This was implemented by:
  
  * Adding a new ml2_dvr_port_bindings table similar to the ml2_port_bindings 
table, but with the host column as part of the primary key.
  * Adding a new DvrPortContext class that overrides several functions in 
PortContext.
  * Adding DVR-specific internal functions to Ml2Plugin, 
_process_dvr_port_binding and _commit_dvr_port_binding, that are modified 
copies of existing functions.
  * In about 8 places, making code conditional on "port['device_owner'] == 
const.DEVICE_OWNER_DVR_INTERFACE" to handle DVR ports using the above models, 
classes and functions instead of the normal ones.
  
  This duplication of schema and code adds significant technical debt to
  the ML2 plugin implementation, requiring developers and reviewers to
  evaluate for all changes whether they need to apply to both the normal
  and DVR-specific copies. In addition, copied code is certain to diverge
- over time, making the the effort to keep the copies as synchronized as
+ over time, making the effort to keep the copies as synchronized as
  possible become more and more difficult.
  
  This unnecessary duplication of schema and code should be significantly
  reduced or completely eliminated by treating a normal non-distributed
  port as a special case of a distributed port that happens to only bind
  on a single host.
  
  The schema would be unified by replacing the existing ml2_port_bindings
  and ml2_dvr_port_bindings tables with two new tables. One would contain
  the port state that is the same for all hosts on which the port binds,
  including the values of the binding:host, binding:vnic_type, and
  binding:profile attributes. The other would contain the port state that
  differs among host-specific bindings, such as the binding:vif_type and
  binding:vif_details attribute values, and the bound driver and segment
  (until these two move to a separate table for hierarchical port
  binding).
  
  Also, the basic idea of distributed port bindings is not specific to
  DVR, and could be used for DHCP and other services, so the schema and
  code could be made more generic as the distributed and normal schema and
  code are unified.

** Description changed:

  Support for distributed port bindings was added to ML2 in order to
  enable the same DVR port to be bound simultaneously on multiple hosts.
  This was implemented by:
  
  * Adding a new ml2_dvr_port_bindings table similar to the ml2_port_bindings 
table, but with the host column as part of the pr

[Yahoo-eng-team] [Bug 1118194] Re: Security Documentation for Horizon

2014-09-09 Thread Andreas Jaeger
** Changed in: openstack-manuals
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1118194

Title:
  Security Documentation for Horizon

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Manuals:
  Fix Released

Bug description:
  Horizon's documentation doesn't contain much in terms of guidelines
  for securing a deployment.

  The following should be documented somewhere:

  When implementing Horizon for public usage, with the website served
  through HTTPS, the following recommendations apply.

  In the Apache global configuration ensure that the following directive
  is configured to prevent the server from sharing its name, version and
  any other information that could be used for an attack:

  ServerSignature Off

  In the Apache global configuration ensure that the following directive
  is configured to prevent cross-site tracing [1]:

  TraceEnable Off

  In the Apache virtual host configuration:

  1) Ensure that the "Indexes" option is not included in the Options directive.
  2) Protect the server from BEAST attacks [2] by implementing the following 
options:

    SSLHonorCipherOrder on
    SSLProtocol -ALL +SSLv3 +TLSv1
    SSLCipherSuite RC4-SHA:RC4:HIGH:!MD5:!aNULL:!EDH:!ADH:!AESGCM:!AES

  In local_settings.py, implement the following settings in order to
  help protect the cookies from cross-site scripting [3]:

  CSRF_COOKIE_SECURE = True
  SESSION_COOKIE_SECURE = True
  SESSION_COOKIE_HTTPONLY = True

  Note that the CSRF_COOKIE_SECURE option is only available from Django
  1.4 and will therefore not work for most packaged Essex deployments.

  Also, since a recent patch [4], you can disable browser autocompletion
  [5] for the authentication form by changing the
  'password_autocomplete' attribute to 'off' in horizon/conf/default.py.

  [1] http://www.kb.cert.org/vuls/id/867593
  [2] http://en.wikipedia.org/wiki/Transport_Layer_Security#BEAST_attack
  [3] https://www.owasp.org/index.php/HttpOnly
  [4] https://review.openstack.org/21349
  [5] 
https://wiki.mozilla.org/The_autocomplete_attribute_and_web_documents_using_XHTML

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1118194/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1283599] Re: TestNetworkBasicOps occasionally fails to delete resources

2014-09-09 Thread David Kranz
This is still hitting persistently but not that often. I think this is
more likely a bug in neutron than in tempest so marking accordingly.
Please reopen in tempest if more evidence appears.

** Changed in: neutron
   Status: New => Confirmed

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1283599

Title:
  TestNetworkBasicOps occasionally fails to delete resources

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in Tempest:
  Invalid

Bug description:
  Network, Subnet and security group appear to be in use when they are deleted.
  Observed in: 
http://logs.openstack.org/84/75284/3/check/check-tempest-dsvm-neutron-full/d792a7a/logs

  Observed so far with neutron full job only.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1283599/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367383] [NEW] VolumeViewTests has nagging 404 errors

2014-09-09 Thread Gloria Gu
Public bug reported:

Just merged from master; now, running run_tests.sh from horizon produces 2
nagging 404 errors. The tests passed, though. They are from

./run_tests.sh
openstack_dashboard.dashboards.project.volumes.volumes.tests:VolumeViewTests

Not Found: Not Found (HTTP 404)
Traceback (most recent call last):
  File 
"/home/stack/horizon/openstack_dashboard/dashboards/project/volumes/volumes/tables.py",
 line 181, in allowed
limits = api.cinder.tenant_absolute_limits(request)
  File "/home/stack/horizon/horizon/utils/memoized.py", line 90, in wrapped
value = cache[key] = func(*args, **kwargs)
  File "/home/stack/horizon/openstack_dashboard/api/cinder.py", line 439, in 
tenant_absolute_limits
limits = cinderclient(request).limits.get().absolute
  File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/v1/limits.py",
 line 92, in get
return self._get("/limits", "limits")
  File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/base.py",
 line 145, in _get
resp, body = self.api.client.get(url)
  File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/client.py",
 line 220, in get
return self._cs_request(url, 'GET', **kwargs)
  File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/client.py",
 line 187, in _cs_request
**kwargs)
  File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/client.py",
 line 170, in request
raise exceptions.from_response(resp, body)
NotFound: Not Found (HTTP 404)
..Not Found: Not Found (HTTP 404)
Traceback (most recent call last):
  File 
"/home/stack/horizon/openstack_dashboard/dashboards/project/volumes/volumes/tables.py",
 line 181, in allowed
limits = api.cinder.tenant_absolute_limits(request)
  File "/home/stack/horizon/horizon/utils/memoized.py", line 90, in wrapped
value = cache[key] = func(*args, **kwargs)
  File "/home/stack/horizon/openstack_dashboard/api/cinder.py", line 439, in 
tenant_absolute_limits
limits = cinderclient(request).limits.get().absolute
  File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/v1/limits.py",
 line 92, in get
return self._get("/limits", "limits")
  File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/base.py",
 line 145, in _get
resp, body = self.api.client.get(url)
  File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/client.py",
 line 220, in get
return self._cs_request(url, 'GET', **kwargs)
  File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/client.py",
 line 187, in _cs_request
**kwargs)
  File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/client.py",
 line 170, in request
raise exceptions.from_response(resp, body)
NotFound: Not Found (HTTP 404)
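
A hedged sketch of one way to quiet these tracebacks in the unit tests, shown
with the "mock" library; the patch target and return value are illustrative
assumptions rather than the eventual fix, and horizon's own tests of that era
stub the API clients differently:

    import mock

    def stub_cinder_limits(test_case):
        # Patch the cinder limits lookup that the volume table's allowed()
        # check performs, so running the tests no longer logs the HTTP 404
        # traceback shown above.
        patcher = mock.patch(
            'openstack_dashboard.api.cinder.tenant_absolute_limits',
            return_value={'maxTotalVolumes': 10, 'totalVolumesUsed': 0})
        patcher.start()
        test_case.addCleanup(patcher.stop)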

** Affects: horizon
 Importance: Undecided
 Assignee: Gloria Gu (gloria-gu)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Gloria Gu (gloria-gu)

** Description changed:

- Just merged from main, now if run_tests.sh from horizon, will have 2
+ Just merged from master, now if run_tests.sh from horizon, will have 2
  nagging 404 errors. Test passed though. They are from
  
  ./run_tests.sh
  openstack_dashboard.dashboards.project.volumes.volumes.tests:VolumeViewTests
  
- 
  Not Found: Not Found (HTTP 404)
  Traceback (most recent call last):
-   File 
"/home/stack/horizon/openstack_dashboard/dashboards/project/volumes/volumes/tables.py",
 line 181, in allowed
- limits = api.cinder.tenant_absolute_limits(request)
-   File "/home/stack/horizon/horizon/utils/memoized.py", line 90, in wrapped
- value = cache[key] = func(*args, **kwargs)
-   File "/home/stack/horizon/openstack_dashboard/api/cinder.py", line 439, in 
tenant_absolute_limits
- limits = cinderclient(request).limits.get().absolute
-   File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/v1/limits.py",
 line 92, in get
- return self._get("/limits", "limits")
-   File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/base.py",
 line 145, in _get
- resp, body = self.api.client.get(url)
-   File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/client.py",
 line 220, in get
- return self._cs_request(url, 'GET', **kwargs)
-   File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/client.py",
 line 187, in _cs_request
- **kwargs)
-   File 
"/home/stack/horizon/.venv/local/lib/python2.7/site-packages/cinderclient/client.py",
 line 170, in request
- raise exceptions.from_response(resp, body)
+   File 
"/home/stack/horizon/openstack_dashboard/dashboards/project/volumes/volumes/tables.py",
 line 181, in allowed
+ limits = api.cinder.tenant_absolute_limits(request)
+   File "/home/stack/horizon/horizon/ut

[Yahoo-eng-team] [Bug 1268274] Re: KeyError in _get_server_ip

2014-09-09 Thread Matthew Treinish
** Changed in: tempest
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1268274

Title:
  KeyError in _get_server_ip

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in Tempest:
  Fix Released

Bug description:
  In gate-tempest-dsvm-neutron test:

  2014-01-11 16:39:43.311 | Traceback (most recent call last):
  2014-01-11 16:39:43.311 |   File 
"tempest/scenario/test_cross_tenant_connectivity.py", line 482, in 
test_cross_tenant_traffic
  2014-01-11 16:39:43.311 | self._test_in_tenant_block(self.demo_tenant)
  2014-01-11 16:39:43.311 |   File 
"tempest/scenario/test_cross_tenant_connectivity.py", line 380, in 
_test_in_tenant_block
  2014-01-11 16:39:43.311 | ip=self._get_server_ip(server),
  2014-01-11 16:39:43.311 |   File 
"tempest/scenario/test_cross_tenant_connectivity.py", line 326, in 
_get_server_ip
  2014-01-11 16:39:43.311 | return server.networks[network_name][0]
  2014-01-11 16:39:43.312 | KeyError: u'network-smoke--tempest-1504528870'

  http://logs.openstack.org/39/65039/4/gate/gate-tempest-dsvm-
  neutron/cb3457d/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1268274/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302482] Re: VMware driver: Nova boot fails when there is a datacenter with no datastore associated with it.

2014-09-09 Thread Alan Pevec
** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Changed in: nova/havana
   Status: New => In Progress

** Changed in: nova/havana
   Importance: Undecided => Low

** Changed in: nova/havana
 Assignee: (unassigned) => Gary Kotton (garyk)

** Changed in: nova/havana
Milestone: None => 2013.2.4

** Tags removed: havana-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1302482

Title:
  VMware driver: Nova boot fails when there is a datacenter with no
  datastore associated with it.

Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  In Progress
Status in OpenStack Compute (nova) icehouse series:
  Fix Released

Bug description:
  When there is a Datacenter in the vCenter with no datastore associated
  with it, nova boot fails even though there are Datacenters
  configured properly.

  The log error trace

  Error from last host: devstack (node domain-c162(Demo-1)): [u'Traceback (most 
recent call last):\n', u'  File "/opt/stack/nova/nova/compute/manager.py", line 
1322, in _build_instance\nset_access_ip=set_access_ip)\n', u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 399, in decorated_function\n
return function(self, context, *args, **kwargs)\n', u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 1734, in _spawn\n
LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', u'  File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__\n
six.reraise(self.type_, self.value, self.tb)\n', u'  File 
"/opt/stack/nova/nova/compute/manager.py", line 1731, in _spawn\n
block_device_info)\n', u'  File 
"/opt/stack/nova/nova/virt/vmwareapi/driver.py", line 619, in spawn\n
admin_password, network_info, block_device_info)\n', u'  File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 211, in spawn\ndc_info 
= self.get_datacenter_re
 f_and_name(data_store_ref)\n', u'  File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 1715, in 
get_datacenter_ref_and_name\n
self._update_datacenter_cache_from_objects(dcs)\n', u'  File 
"/opt/stack/nova/nova/virt/vmwareapi/vmops.py", line 1693, in 
_update_datacenter_cache_from_objects\ndatastore_refs = 
p.val.ManagedObjectReference\n', u"AttributeError: 'Text' object has no 
attribute 'ManagedObjectReference'\n"]
  2014-04-04 03:05:41.629 WARNING nova.scheduler.driver 
[req-cc690e5a-2bf3-4566-a697-30ca882df815 nova service] [instance: 
f0abb23a-943a-475d-ac63-69d2563362cb] Setting instance to ERROR state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1302482/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366909] Re: disk image create creates image size of at least 1G

2014-09-09 Thread Thang Pham
Ok, diskimage-builder is not a nova component. It is its own project.

** Project changed: nova => diskimage-builder

** Changed in: diskimage-builder
   Status: Incomplete => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366909

Title:
  disk image create creates image size of at least 1G

Status in Openstack disk image builder:
  New

Bug description:
  disk image create creates a qcow2 image with a virtual size of at least 1G. If 
DISK-IMAGE-SIZE is given in decimals, the truncate command errors out with an 
invalid number. This forces the image to be at least 1G in size. Could this be 
specified in megabytes so that an image smaller than 1G can be created?
  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/diskimage-builder/+bug/1366909/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367363] [NEW] Libvirt-lxc will leak nbd devices on instance shutdown

2014-09-09 Thread Andrew Melton
Public bug reported:

Shutting down a libvirt-lxc based instance will leak the nbd device.
This happens because _teardown_container will only be called when the
libvirt domain is running. During a shutdown, the domain is not
running at the time of the destroy. Thus, _teardown_container is never
called and the nbd device is never disconnected.

Steps to reproduce:
1) Create devstack using local.conf: 
https://gist.github.com/ramielrowe/6ae233dc2c2cd479498a
2) Create an instance
3) Perform ps ax |grep nbd on devstack host. Observe connected nbd device
4) Shutdown instance
5) Perform ps ax |grep nbd on devstack host. Observe connected nbd device
6) Delete instance
7) Perform ps ax |grep nbd on devstack host. Observe connected nbd device

Nova has now leaked the nbd device.
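
A minimal sketch (not the actual nova code) of the pattern described above and
the obvious fix direction; is_running() and teardown_container() stand in for
the real driver internals:

    def destroy_buggy(virt_dom, instance, teardown_container, is_running):
        # Reported behaviour: teardown only happens while the domain is still
        # running, so a previously stopped instance keeps its nbd device.
        if virt_dom is not None and is_running(virt_dom):
            virt_dom.destroy()
            teardown_container(instance)

    def destroy_fixed(virt_dom, instance, teardown_container, is_running):
        # Fix direction: destroy the domain if it is still running, but always
        # release the container's backing device so nbd is not leaked.
        if virt_dom is not None and is_running(virt_dom):
            virt_dom.destroy()
        teardown_container(instance)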

** Affects: nova
 Importance: Undecided
 Assignee: Andrew Melton (andrew-melton)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Andrew Melton (andrew-melton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367363

Title:
  Libvirt-lxc will leak nbd devices on instance shutdown

Status in OpenStack Compute (Nova):
  New

Bug description:
  Shutting down a libvirt-lxc based instance will leak the nbd device.
  This happens because _teardown_container will only be called when the
  libvirt domain is running. During a shutdown, the domain is not
  running at the time of the destroy. Thus, _teardown_container is never
  called and the nbd device is never disconnected.

  Steps to reproduce:
  1) Create devstack using local.conf: 
https://gist.github.com/ramielrowe/6ae233dc2c2cd479498a
  2) Create an instance
  3) Perform ps ax |grep nbd on devstack host. Observe connected nbd device
  4) Shutdown instance
  5) Perform ps ax |grep nbd on devstack host. Observe connected nbd device
  6) Delete instance
  7) Perform ps ax |grep nbd on devstack host. Observe connected nbd device

  Nova has now leaked the nbd device.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367363/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363515] Re: Cannot display page at all for logged-in users after upgrade django-openstack-auth to the latest

2014-09-09 Thread Akihiro Motoki
The django-openstack-auth fix has been merged, so there is no need to hold
the Horizon bug open now.

** Changed in: horizon
   Status: New => Invalid

** Changed in: horizon
Milestone: juno-rc1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1363515

Title:
  Cannot display page at all for logged-in users after upgrade django-
  openstack-auth to the latest

Status in Django OpenStack Auth:
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  If a user token object was created by an old version of openstack-auth
  (before b6b52f2), the user cannot display pages or even log out, and
  gets the error 'Token' object has no attribute 'user_domain_name' after
  upgrading openstack-auth to commit b6b52f2 or a newer version.

  A user who newly logs in after upgrading openstack-auth can use Horizon
  successfully.

  How to reproduce in devstack:

 run stack.sh once
 cd /opt/stack/django_openstack
 git checkout abfb9359d260ca437900e976edae2727cd5f18d7
 sudo pip install .
 sudo service apache2 restart

 login Horizon and keep logged in.

 git checkout master (or b6b52f29c070ddd544e3f5cb801cc246970b1815)
 sudo pip install .
 sudo service apache2 restart

 Access Horizon
 You will get the error 'Token' object has no attribute 'user_domain_name'

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1363515/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352105] Re: can not get network info from metadata server

2014-09-09 Thread Sean Dague
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1352105

Title:
  can not get network info from metadata server

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  I want to use cloudinit to get the network info and write it into the network
configuration file, but it failed because cloudinit didn't get the network info.
  In the VM, I used curl to test the metadata as below:
  #curl http://169.254.169.254/openstack/latest/meta_data.json
  The response didn't contain any network info.
  See the following code in nova/virt/netutils.py:
  if subnet_v4:
      if subnet_v4.get_meta('dhcp_server') is not None:
          continue
  It seems that when the VM uses neutron DHCP, the network info will be ignored in
meta_data.json
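
  Read in context, that check means roughly the following (a paraphrase for
  illustration, not the full netutils code):

      def subnet_included_in_metadata(subnet_v4):
          # A subnet served by a neutron DHCP agent (dhcp_server set) is
          # skipped, so its addresses never appear in meta_data.json.
          return (subnet_v4 is not None and
                  subnet_v4.get_meta('dhcp_server') is None)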

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1352105/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367349] [NEW] ironic: Not listing all nodes registered in Ironic due to pagination

2014-09-09 Thread Lucas Alvares Gomes
Public bug reported:

The Ironic API supports pagination and limits the number of items returned by
the API based on a config option called "max_limit"; by default a max
of 1000 items is returned per request [1].

The Ironic client library respects that limit by default, so when the
Nova Ironic driver lists the nodes for reasons like verifying how many
resources we have available, we can hit that limit and the wrong
information will be passed to nova.

Luckily, the ironic client supports passing a limit=0 flag when listing
resources as an indicator to the library to continue pagination until
there are no more resources to be returned [2]. We need to update the
calls in the Nova Ironic driver to make sure we get all items from the
API when needed.

 [1] 
https://github.com/openstack/ironic/blob/master/ironic/api/__init__.py#L26-L29
 [2] 
https://github.com/openstack/python-ironicclient/blob/master/ironicclient/v1/node.py#L52
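
A minimal sketch of the fix direction, assuming an already constructed
python-ironicclient client object; per [2], limit=0 tells the client library
to keep following pagination links instead of stopping at the server-side
max_limit:

    def list_all_nodes(ironic_client):
        # node.list() with default arguments may silently stop at the API's
        # max_limit (1000 nodes per request); limit=0 asks the client to keep
        # paginating until there is nothing left to return.
        return ironic_client.node.list(limit=0)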

** Affects: nova
 Importance: Undecided
 Assignee: Lucas Alvares Gomes (lucasagomes)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Lucas Alvares Gomes (lucasagomes)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367349

Title:
  ironic: Not listing all nodes registered in Ironic due to pagination

Status in OpenStack Compute (Nova):
  New

Bug description:
  The Ironic API supports pagination and limits the number of items returned
  by the API based on a config option called "max_limit"; by default a
  max of 1000 items is returned per request [1].

  The Ironic client library respects that limit by default, so when the
  Nova Ironic driver lists the nodes for reasons like verifying how many
  resources we have available, we can hit that limit and the
  wrong information will be passed to nova.

  Luckily, the ironic client supports passing a limit=0 flag when listing
  resources as an indicator to the library to continue pagination until
  there are no more resources to be returned [2]. We need to update the
  calls in the Nova Ironic driver to make sure we get all items from the
  API when needed.

   [1] 
https://github.com/openstack/ironic/blob/master/ironic/api/__init__.py#L26-L29
   [2] 
https://github.com/openstack/python-ironicclient/blob/master/ironicclient/v1/node.py#L52

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282956] Re: ML2 : hard reboot a VM after a compute crash

2014-09-09 Thread Sean Dague
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1282956

Title:
  ML2 : hard reboot a VM after a compute crash

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  I run a multi-node setup with ML2, L2-population and Linuxbridge MD,
  and the vxlan TypeDriver.

  I start two compute-nodes, I launch a VM, and I shut down the compute-
  node which hosts the VM.

  I use this process to relaunch the VM on the other compute-node :

  http://docs.openstack.org/trunk/openstack-
  ops/content/maintenance.html#totle_compute_node_failure

  Once the VM is launched on the other compute node, fdb entries and
  neighbouring entries are no longer populated on the network node or on
  the compute node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1282956/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367342] [NEW] call to _set_instance_error_state is incorrect in do_build_and_run_instance

2014-09-09 Thread Kenneth Burger
Public bug reported:

nova/compute/manager.py  in do_build_and_run_instance
Under  except exception.RescheduledException as e:
...
self._set_instance_error_state(context, instance.uuid)

This should pass instance only, not instance.uuid.
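
A toy illustration (not nova code) of why the current call breaks, on the
assumption that the helper dereferences .uuid on whatever it is given:

    class Instance(object):
        def __init__(self, uuid):
            self.uuid = uuid

    def set_instance_error_state(instance):
        # Stand-in for the real helper, which expects an instance object.
        print("marking %s as ERROR" % instance.uuid)

    inst = Instance("fake-uuid")
    set_instance_error_state(inst)        # correct: pass the instance object
    # set_instance_error_state(inst.uuid) # wrong: a str has no attribute 'uuid'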

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367342

Title:
  call to _set_instance_error_state is incorrect in
  do_build_and_run_instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova/compute/manager.py  in do_build_and_run_instance
  Under  except exception.RescheduledException as e:
  ...
  self._set_instance_error_state(context, instance.uuid)

  This should pass instance only, not instance.uuid.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367344] [NEW] Libvirt Watchdog support is broken when ComputeCapabilitiesFilter is used

2014-09-09 Thread Roman Podoliaka
Public bug reported:

The doc (http://docs.openstack.org/admin-guide-cloud/content/customize-
flavors.html , section "Watchdog behavior") suggests using the flavor
extra specs property called "hw_watchdog_action" to configure a watchdog
device for libvirt guests. Unfortunately, this is broken due to
ComputeCapabilitiesFilter trying to use this property to filter compute
hosts, so that scheduling of a new instance always fails with a
NoValidHostFound error.
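
A hedged sketch of how the failure is triggered, assuming an existing
python-novaclient handle named nova; the extra spec key comes from the admin
guide section cited above, and "pause" is one of the libvirt watchdog actions:

    def enable_watchdog(nova, flavor_name):
        flavor = nova.flavors.find(name=flavor_name)
        # Setting the documented (unscoped) key makes ComputeCapabilitiesFilter
        # treat it as a required host capability, so every host is filtered out
        # and the next boot with this flavor fails with NoValidHost.
        flavor.set_keys({'hw_watchdog_action': 'pause'})
        return flavor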

** Affects: nova
 Importance: Undecided
 Assignee: Roman Podoliaka (rpodolyaka)
 Status: New


** Tags: libvirt

** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367344

Title:
  Libvirt Watchdog support is broken when ComputeCapabilitiesFilter is
  used

Status in OpenStack Compute (Nova):
  New

Bug description:
  The doc (http://docs.openstack.org/admin-guide-cloud/content
  /customize-flavors.html , section "Watchdog behavior") suggests using
  the flavor extra specs property called "hw_watchdog_action" to
  configure a watchdog device for libvirt guests. Unfortunately, this is
  broken due to ComputeCapabilitiesFilter trying to use this property to
  filter compute hosts, so that scheduling of a new instance always
  fails with a NoValidHostFound error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1243742] Re: fail to start vnc during instance create snapshot

2014-09-09 Thread Sean Dague
Super old bug, marking as invalid

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1243742

Title:
  fail to start vnc during instance create snapshot

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  If we try to log into the instance console while creating a snapshot of the
instance, we fail.
  The nova compute log shows:

  2013-10-23 13:46:18.681 3943 DEBUG qpid.messaging.io.ops [-] SENT[4464950]: 
SessionCompleted(commands=[0-41597]) write_op 
/usr/lib/python2.6/site-packages/qpid/messaging/driver.py:686
  2013-10-23 13:46:18.687 3943 ERROR nova.openstack.common.rpc.amqp 
[req-696761cc-53ad-4530-a053-84588eedca6e a660044c9b074450aaa45fba0d641fcc 
e27aae2598b94dca88cd0408406e0848] Exception during message handling
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, 
in _process_data
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 353, in 
decorated_function
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line 90, in wrapped
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp payload)
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/exception.py", line 73, in wrapped
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 243, in 
decorated_function
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 229, in 
decorated_function
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 271, in 
decorated_function
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 258, in 
decorated_function
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 319, in 
decorated_function
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp % 
image_id, instance=instance)
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 309, in 
decorated_function
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp *args, 
**kwargs)
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2293, in 
snapshot_instance
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp 
task_states.IMAGE_SNAPSHOT)
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2324, in 
_snapshot_instance
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp 
update_task_state)
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1423, in 
snapshot
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp 
expected_state=task_states.IMAGE_PENDING_UPLOAD)
  2013-10-23 13:46:18.687 3943 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2321, in 
update_task_state
  2013-10-23 13:46:18.687 3943 TRACE nova.opens

[Yahoo-eng-team] [Bug 1265501] Re: TestAttachInterfaces fails on neutron w/parallelism

2014-09-09 Thread Sean Dague
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265501

Title:
  TestAttachInterfaces fails on neutron w/parallelism

Status in Tempest:
  Fix Committed

Bug description:
  Failure instance: http://logs.openstack.org/85/64185/1/experimental
  /check-tempest-dsvm-neutron-isolated-
  parallel/94ca5ac/console.html.gz#_2013-12-27_13_37_15_639

  This failure is the 3rd most frequent with parallel testing (after the
  error on port quota check and the timeout due to ssh protocol banner
  error)

  It seems the problem might lie in the fact that the operation, when
  neutron is enabled, does not complete in the expected time (which
  seems to be 5 seconds for this test).

  Possible approaches:
  - increase timeout
  - enable multiple neutron api workers (might have side effects as pointed out 
by other contributors)
  - address the issue in the nova/neutron interface

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1265501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276778] Re: test_lock_unlock_server: failed to reach ACTIVE status and task state "None" within the required time

2014-09-09 Thread David Kranz
Searching for

failed to reach ACTIVE status and task state "None"

shows a lot of different bug tickets. This does not seem like a tempest
bug.

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276778

Title:
  test_lock_unlock_server: failed to reach ACTIVE status and task state
  "None" within the required time

Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  Invalid

Bug description:
   Traceback (most recent call last):
 File "tempest/api/compute/servers/test_server_actions.py", line 419, in 
test_lock_unlock_server
   self.servers_client.wait_for_server_status(self.server_id, 'ACTIVE')
 File "tempest/services/compute/xml/servers_client.py", line 371, in 
wait_for_server_status
   raise_on_error=raise_on_error)
 File "tempest/common/waiters.py", line 89, in wait_for_server_status
  raise exceptions.TimeoutException(message)
   TimeoutException: Request timed out
   Details: Server c73d5bba-4f88-4279-8de6-9c66844e72e2 failed to reach ACTIVE 
status and task state "None" within the required time (196 s). Current status: 
SHUTOFF. Current task state: None.

  Source: http://logs.openstack.org/47/70647/3/gate/gate-tempest-dsvm-
  full/b8607e6/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1276778/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1232699] Re: nova-network: make sure bridge device is up before creating vlan

2014-09-09 Thread Sean Dague
** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1232699

Title:
  nova-network: make sure bridge device is up before creating vlan

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Currently, nova-network creates the vlan device automatically if it
  doesn't exist on the host. However, when creating the vlan device, nova only
  makes sure the newly created vlan device is up, not the bridge
  interface. Thus, the vlan device is left in M-DOWN state in this
  situation, and the network is not accessible.
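
  A minimal sketch of the ordering the report asks for, using plain iproute2
  commands via subprocess instead of nova-network's linux_net helpers;
  bridge_dev and vlan_dev are placeholder names:

      import subprocess

      def bring_up(dev):
          subprocess.check_call(['ip', 'link', 'set', 'dev', dev, 'up'])

      def ensure_vlan_up(bridge_dev, vlan_dev):
          # Current behaviour per the report: only the vlan device is set UP.
          # Requested behaviour: also make sure the bridge interface is UP,
          # otherwise the vlan device is left in M-DOWN state.
          bring_up(bridge_dev)
          bring_up(vlan_dev)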

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1232699/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367243] Re: Bad error report when trying to connect VM to an overcrowded Network

2014-09-09 Thread Sean Dague
seems like mostly a neutron issue

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367243

Title:
  Bad error report when trying to connect VM to an overcrowded Network

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Trying to create a port when the network has no more addresses, Neutron
returns this error:
  No more IP addresses available on network 
a4e997dc-ba2e--9394-cfd89f670886.

  However, when trying to create a VM in a network that has no more addresses, 
the VM is created with Error details:
  "No valid host was found"

  That's because the compute agent sees the PortLimitExceeded error 
from neutron and reports that to the scheduler.
  The scheduler mistakes that for a hypervisor limit and tries another compute 
(which fails for the same reason) and finally reports the error as "No host 
found", while the error should be "No more IP addresses available on network 
a4e997dc-ba2e--9394-cfd89f670886"

  Neutron, however, doesn't register any errors for this flow.

  Recreate:
  1. create a subnet with 31 mask bits (which means no ports available for VMs)
  2. Boot a VM. Errors will be seen in nova-compute and nova-scheduler

  nova compute:
  2014-09-09 14:30:11.517 123433 AUDIT nova.compute.manager 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Starting instance...
  2014-09-09 14:30:11.608 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Attempting claim: memory 64 MB, disk 0 
GB, VCPUs 1
  2014-09-09 14:30:11.608 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Total memory: 31952 MB, used: 1152.00 MB
  2014-09-09 14:30:11.609 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] memory limit: 47928.00 MB, free: 46776.00 
MB
  2014-09-09 14:30:11.609 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Total disk: 442 GB, used: 0.00 GB
  2014-09-09 14:30:11.609 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] disk limit not specified, defaulting to 
unlimited
  2014-09-09 14:30:11.609 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Total CPUs: 24 VCPUs, used: 10.00 VCPUs
  2014-09-09 14:30:11.610 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] CPUs limit: 384.00 VCPUs, free: 374.00 
VCPUs
  2014-09-09 14:30:11.610 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Claim successful
  2014-09-09 14:30:12.141 123433 WARNING nova.network.neutronv2.api [-] Neutron 
error: quota exceeded
  2014-09-09 14:30:12.142 123433 ERROR nova.compute.manager [-] Instance failed 
network setup after 1 attempt(s)
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager Traceback (most 
recent call last):
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1528, in 
_allocate_network_async
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager 
dhcp_options=dhcp_options)
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 360, in 
allocate_for_instance
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager 
LOG.exception(msg, port_id)
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
  2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager 
six.reraise(self.type_, self.value, self.tb)
  2014-09-09 14:

[Yahoo-eng-team] [Bug 1362528] Re: cirros starts with file system in read only mode

2014-09-09 Thread Sean Dague
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1362528

Title:
  cirros starts with file system in read only mode

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU3RhcnRpbmcgZHJvcGJlYXIgc3NoZDogbWtkaXI6IGNhbid0IGNyZWF0ZSBkaXJlY3RvcnkgJy9ldGMvZHJvcGJlYXInOiBSZWFkLW9ubHkgZmlsZSBzeXN0ZW1cIiBBTkQgdGFnczpcImNvbnNvbGVcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwOTIxNzMzOTM5OSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

  The VM boots incorrectly, the SSH service does not start, and the
  connection fails.

  http://logs.openstack.org/16/110016/7/gate/gate-tempest-dsvm-neutron-
  pg-full/603e3c6/console.html#_2014-08-26_08_59_39_951

  
  Only observed with neutron, 1 gate hit in 7 days.
  No hint about the issue in syslog or libvirt logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1362528/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364757] Re: VMWare: with a number of VMs in place, compute process takes a long time to start

2014-09-09 Thread Sean Dague
This doesn't seem to be in scope; I don't understand why the VMWare driver
is bothering to access existing VMs.

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1364757

Title:
  VMWare: with a number of VMs in place, compute process takes a long
  time to start

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  When there are a number of VMs in vCenter (1200+ VMs) managed by nova, the nova
compute service needs more than
  1.5 h to start.

  Since the init host will try to sync all VMs' power status from
  vCenter, as the number of VMs increases this will also cost a lot of
  time. This severely impacts the performance of getting the
  compute service up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1364757/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366911] Re: Nova does not ensure a valid token is available if snapshot process exceeds token lifetime

2014-09-09 Thread Nikolay Starodubtsev
My idea is to use a trust instead of a token for the image upload. I need to
do some investigation in this direction; then I'll update the bug
description.

** Project changed: keystone => nova

** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => Nikolay Starodubtsev (starodubcevna)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366911

Title:
  Nova does not ensure a valid token is available if snapshot process
  exceeds token lifetime

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Recently we encountered the following issue due to the change in
  Icehouse for the default lifetime of a token before it expires. It's
  now 1 hour, while previously it was 8.

  If a snapshot process takes longer than an hour, when it goes to the
  next phase it will fail with a 401 Unauthorized error because it has
  an invalid token.

  In our specific example the following would take place:

  1. User would set a snapshot to begin and a token would be associated with 
this request.
  2. Snapshot would be created, compression time would take about 55 minutes. 
Enough to just push the snapshotting of this instance over the 60 minute mark.
  3. Upon Image Upload ("Uploading image data for image" in the logs) Nova 
would then return a 401 Unauthorized error stating "This server could not 
verify that you are authorized to access the document you requested. Either you 
supplied the wrong credentials (e.g., bad password), or your browser does not 
understand how to supply the credentials required."

  Icehouse 2014.1.2, KVM as the hypervisor.

  The workaround is to specify a longer token timeout; however, that limits
  the ability to set short token expirations.

  A possible solution may be to get a new/refresh the token if the time
  has exceeded the timeout.
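
  A minimal sketch of the "refresh the token" idea, assuming the Icehouse-era
  python-keystoneclient v2.0 API and that password credentials are available
  when the upload step runs:

      from keystoneclient.v2_0 import client as ks_client

      def fresh_token(username, password, tenant_name, auth_url):
          # Re-authenticate instead of reusing a token that may have expired
          # while the snapshot was being compressed, then use the new token
          # for the image upload.
          ks = ks_client.Client(username=username, password=password,
                                tenant_name=tenant_name, auth_url=auth_url)
          return ks.auth_token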

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1366911/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1341967] Re: Instance failed to create in openstack Icehouse with xen

2014-09-09 Thread Sean Dague
libvirt xen is largely untested; it is considered a class C supported
hypervisor.

** Summary changed:

- Instance failed to create in openstack Icehouse with xen
+ Instance failed to create in openstack Icehouse with libvirt xen

** Summary changed:

- Instance failed to create in openstack Icehouse with libvirt xen
+ Windows 7 64bit instance failed to create in openstack Icehouse with libvirt 
xen

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1341967

Title:
  Windows 7 64bit instance failed to create in openstack Icehouse with
  libvirt xen

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  Hi,
  openstack Icehouse with Xen4 on CentOS 6.5.
  Toolstack: Libvirt

  We have installed one controller with compute1 + compute2.

  While creating an instance, it failed with the error "Instance failed to
  spawn"

  Please find attached logs of nova compute and Scheduler.

  Nova.conf

  rpc_backend = qpid
  qpid_hostname = 192.168.1.6
  my_ip = 192.168.1.6
  vncserver_listen = 192.168.1.6
  vncserver_proxyclient_address = 192.168.1.6
  vnc_enabled=True
  libvirt_ovs_bridge=br-int
  libvirt_vif_type=ethernet
  libvirt_use_virtio_for_bridges=True
  libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
  rpc_backend = qpid
  qpid_hostname = 192.168.1.6
  my_ip = 192.168.1.6
  vncserver_listen = 192.168.1.6
  vncserver_proxyclient_address = 192.168.1.6
  vnc_enabled=True
  libvirt_ovs_bridge=br-int
  libvirt_vif_type=ethernet
  libvirt_use_virtio_for_bridges=True
  libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
  instance_usage_audit=True
  instance_usage_audit_period=hour
  notification_driver=nova.openstack.common.notifier.rpc_notifier
  compute_driver = libvirt.LibvirtDriver
  instance_usage_audit=True
  instance_usage_audit_period=hour
  notification_driver=nova.openstack.common.notifier.rpc_notifier
  compute_driver = libvirt.LibvirtDriver

  Nova Scheduler log:

  2014-07-14 18:47:58.261 4152 WARNING nova.scheduler.filters.compute_filter 
[req-7c3ad960-9b93-4d4e-a43e-090d5c08d6ba 818ef8e8816b426b95d3708bc7949fc1 
ea31329396294185bfba564a02cc50d0] (compute1, compute1) ram:29607 disk:21504 
io_ops:1 instances:2 has not been heard from in a while
  2014-07-14 18:48:01.614 4152 ERROR nova.scheduler.filter_scheduler 
[req-7c3ad960-9b93-4d4e-a43e-090d5c08d6ba 818ef8e8816b426b95d3708bc7949fc1 
ea31329396294185bfba564a02cc50d0] [instance: 
dade5a94-0d65-4bcb-a60d-27fd334b5b77] Error from last host: controller (node 
controller): [u'Traceback (most recent call last):\n', u'  File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1305, in 
_build_instance\nset_access_ip=set_access_ip)\n', u'  File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 393, in 
decorated_function\nreturn function(self, context, *args, **kwargs)\n', u'  
File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1717, in 
_spawn\nLOG.exception(_(\'Instance failed to spawn\'), 
instance=instance)\n', u'  File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  File 
"/usr/lib/python2.6/site-packages/nova/compute/manager
 .py", line 1714, in _spawn\nblock_device_info)\n', u'  File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 2265, in 
spawn\nblock_device_info)\n', u'  File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3656, in 
_create_domain_and_network\npower_on=power_on)\n', u'  File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3559, in 
_create_domain\ndomain.XMLDesc(0))\n', u'  File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__\nsix.reraise(self.type_, self.value, self.tb)\n', u'  File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3554, in 
_create_domain\ndomain.createWithFlags(launch_flags)\n', u'  File 
"/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 179, in doit\n
result = proxy_call(self._autowrap, f, *args, **kwargs)\n', u'  File 
"/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 139, in proxy_call\n 
   rv = exec
 ute(f,*args,**kwargs)\n', u'  File 
"/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 77, in tworker\n
rv = meth(*args,**kwargs)\n', u'  File 
"/usr/lib64/python2.6/site-packages/libvirt.py", line 708, in createWithFlags\n 
   if ret == -1: raise libvirtError (\'virDomainCreateWithFlags() failed\', 
dom=self)\n', u'libvirtError: POST operation failed: xend_post: error from xen 
daemon: (xend.err "(\'create\', 
\'-aqcow2:/var/lib/nova/instances/dade5a94-0d65-4bcb-a60d-27fd334b5b77/disk\') 
failed (512  )")

[Yahoo-eng-team] [Bug 1334015] Re: "Server could not comply with the request" for .test_create_image_from_stopped_server

2014-09-09 Thread Sean Dague
This is a drive-by bug without enough info to get to the bottom of it;
closing.

** No longer affects: openstack-ci

** Changed in: nova
   Status: New => Incomplete

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1334015

Title:
  "Server could not comply with the request" for
  .test_create_image_from_stopped_server

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The following gate test failed for patchset
  https://review.openstack.org/#/c/98693/:

  
tempest.api.compute.images.test_images_negative.ImagesNegativeTestXML.test_create_image_from_stopped_server

  The console.log showed the following traceback:

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "tempest/api/compute/images/test_images_negative.py", line 63, in 
test_create_image_from_stopped_server
  resp, server = self.create_test_server(wait_until='ACTIVE')
File "tempest/api/compute/base.py", line 247, in create_test_server
  raise ex
  BadRequest: Bad request
  Details: {'message': 'The server could not comply with the request since 
it is either malformed or otherwise incorrect.', 'code': '400'}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1334015/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340518] Re: can not use virsh console on serial terminal

2014-09-09 Thread Sean Dague
It is not clear what the use case is here; please explain further if
you'd like to take this forward. Also, patches need to be provided via
gerrit, not via the tracker.

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1340518

Title:
  can not use virsh console on serial terminal

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  In the KVM case:
  We cannot use virsh console on a serial terminal,
  so we cannot log in to each VM using the 'virsh console' command on the terminal,
  because the VM's config XML file does not support it now.
  This feature is very important for us.

  Please apply this patch:
  CONF.libvirt.virsh_console_serial=False (the default; same behaviour as now)

  If you are using virsh console, then set
  CONF.libvirt.virsh_console_serial=True

  
  diff --git a/nova/virt/libvirt/config.py b/nova/virt/libvirt/config.py
  index 8eaf658..090e17b 100644
  --- a/nova/virt/libvirt/config.py
  +++ b/nova/virt/libvirt/config.py
  @@ -1053,6 +1053,9 @@ class 
LibvirtConfigGuestCharBase(LibvirtConfigGuestDevice):
   dev = super(LibvirtConfigGuestCharBase, self).format_dom()

   dev.set("type", self.type)
  +if self.root_name == "console":
  +dev.set("tty", self.source_path)
  +
   if self.type == "file":
   dev.append(etree.Element("source", path=self.source_path))
   elif self.type == "unix":
  diff --git a/nova/virt/libvirt/driver.py b/nova/virt/libvirt/driver.py
  index 9bd75fa..de2735e 100644
  --- a/nova/virt/libvirt/driver.py
  +++ b/nova/virt/libvirt/driver.py
  @@ -213,6 +213,9 @@ libvirt_opts = [
   help='A path to a device that will be used as source of '
'entropy on the host. Permitted options are: '
'/dev/random or /dev/hwrng'),
  +cfg.BoolOpt('virsh_console_serial',
  +default=False,
  +help='Use virsh console on serial terminal'),
   ]

   CONF = cfg.CONF
  @@ -3278,14 +3281,29 @@ class LibvirtDriver(driver.ComputeDriver):
   # client app is connected. Thus we can't get away
   # with a single type=pty console. Instead we have
   # to configure two separate consoles.
  -consolelog = vconfig.LibvirtConfigGuestSerial()
  -consolelog.type = "file"
  -consolelog.source_path = self._get_console_log_path(instance)
  -guest.add_device(consolelog)

  -consolepty = vconfig.LibvirtConfigGuestSerial()
  -consolepty.type = "pty"
  -guest.add_device(consolepty)
  +if CONF.libvirt.virsh_console_serial:  # Y.Kawada
  +consolepty = vconfig.LibvirtConfigGuestSerial()
  +consolepty.type = "pty"
  +consolepty.target_port = "0"
  +consolepty.source_path = "/dev/pts/11"
  +consolepty.alias_name = "serial0"
  +guest.add_device(consolepty)
  +
  +consolepty = vconfig.LibvirtConfigGuestConsole()
  +consolepty.type = "pty"
  +consolepty.target_port = "0"
  +consolepty.source_path = "/dev/pts/11"
  +consolepty.alias_name = "serial0"
  +else:
  +consolelog = vconfig.LibvirtConfigGuestSerial()
  +consolelog.type = "file"
  +consolelog.source_path = self._get_console_log_path(instance)
  +guest.add_device(consolelog)
  +
  +consolepty = vconfig.LibvirtConfigGuestSerial()
  +consolepty.type = "pty"
  +guest.add_device(consolepty)
   else:
   consolepty = vconfig.LibvirtConfigGuestConsole()
   consolepty.type = "pty"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1340518/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1336048] Re: hosts added to the wrong host aggregate in DB sync

2014-09-09 Thread Sean Dague
Havana is about to go out of support, so I think there is no purpose in
fixing this at this point.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1336048

Title:
  hosts added to the wrong host aggregate in DB sync

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  This issue was found in stable/havana when we upgraded nova DB from
  folsom to havana.

  When there are multiple host aggregates associated with an
  availability zone, the migration logic adds all hosts belonging to the
  availability zone to the first host aggregate.

  The code at line 41 of
  /nova/nova/db/sqlalchemy/migrate_repo/versions/147_no_service_zones.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1336048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267362] Re: The meaning of "Disk GB Hours" is not really clear

2014-09-09 Thread Sean Dague
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1267362

Title:
  The meaning of "Disk GB Hours" is not really clear

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Manuals:
  Incomplete

Bug description:
  When visiting the admin overview page,
  http://localhost:8000/admin/

  the usage summary lists

  Disk GB Hours

  The same term "This Period's GB-Hours: 0.00" can be found e.g. here:
  
https://github.com/openstack/horizon/blob/master/horizon/templates/horizon/common/_usage_summary.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1267362/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1363568] Re: Nova scheduler no longer has access to requested_networks

2014-09-09 Thread Joe Cropper
This will need to be revisited in Kilo - nothing in the nova tree relies
on this at this point in time.

** Changed in: nova
 Assignee: Joe Cropper (jwcroppe) => (unassigned)

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363568

Title:
  Nova scheduler no longer has access to requested_networks

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  With the switch to nova-conductor being responsible for building the
  instances, the scheduler's select_destinations no longer has access to
  the requested networks.

  That is, when schedule_run_instance() was called in the nova-
  scheduler's process space (i.e., as it was in Icehouse), it had the
  ability to interrogate the networks being requested by the user to make
  more intelligent placement decisions.

  This precludes schedulers from making placement decisions that are
  affected by the networks being requested at deploy time (i.e., because
  the networks aren't associated with the VMs in any way at deploy
  time).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1363568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300250] Re: Cannot attach volume to a livecd based vm instance

2014-09-09 Thread Sean Dague
** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1300250

Title:
  Cannot attach volume to a livecd based vm instance

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  It is impossible to attach a volume to a LiveCD-based VM instance. I
  finally managed to do this by hacking the nova database according to:
  http://paste.openstack.org/show/48247/

  Part of log with a problem:

  k/nova/nova/openstack/common/lockutils.py:252
  2014-03-31 13:29:44.117 ERROR nova.virt.block_device 
[req-2cfd1b0b-610f-40a0-8ed4-97ecd6128beb biocloud_psnc biocloud] [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9] Driver failed to attach volume 1a046bbe-a326-4dbe-9f05-e3f2fa40a4e7 at /dev/hda
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9] Traceback (most recent call last):
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 239, in attach
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9] device_type=self['device_type'], 
encryption=encryption)
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1246, in attach_volume
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9] disk_dev)
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9]   File 
"/opt/stack/nova/nova/openstack/common/excutils.py", line 68, in __exit__
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9] six.reraise(self.type_, self.value, 
self.tb)
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1233, in attach_volume
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9] 
virt_dom.attachDeviceFlags(conf.to_xml(), flags)
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 179, in d
  oit
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9] result = proxy_call(self._autowrap, 
f, *args, **kwargs)
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 139, in p
  roxy_call
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9] rv = execute(f,*args,**kwargs)
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 77, in tw
  orker
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9] rv = meth(*args,**kwargs)
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9]   File 
"/usr/lib/python2.7/dist-packages/libvirt.py", line 420, in attachDeviceFl
  ags
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9] if ret == -1: raise libvirtError 
('virDomainAttachDeviceFlags() failed', dom=self)
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9] libvirtError: invalid argument: target 
hda already exists.
  2014-03-31 13:29:44.117 21366 TRACE nova.virt.block_device [instance: 
5a1c854d-5571-4d0e-8414-ae60f57571d9]
  2014-03-31 13:29:44.162 DEBUG nova.volume.cinder 
[req-2cfd1b0b-610f-40a0-8ed4-97ecd6128beb biocloud_psnc biocloud] Cinderclient 
connection created using URL: http://biocloud.vph.psnc.pl:8776/v1/8e4b75100b0d42faa562c1b8f06984cf cinderclient 
/opt/stack/nova/nova/volume/cinder.py:93
  2014-03-31 13:29:44.167 21366 INFO requests.packages.urllib3.connectionpool 
[-] Starting new HTTP connection (1): biocloud.vph.psnc.pl
  2014-03-31 13:29:44.940 21366 DEBUG requests.packages.urllib3.connectionpool 
[-] "POST 
/v1/8e4b75100b0d42faa562c1b8f06984cf/volumes/1a046bbe-a326-4dbe-9f05-e3f2fa40a4e7/action
 HTTP/1.1" 202
   0 _make_request 
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:344
  2014-03-31 13:29:44.942 ERROR nova.compute.manager 

[Yahoo-eng-team] [Bug 1271706] Re: Misleading warning about MySQL TRADITIONAL mode not being set

2014-09-09 Thread Sean Dague
seems fixed in nova

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271706

Title:
  Misleading warning about MySQL TRADITIONAL mode not being set

Status in OpenStack Telemetry (Ceilometer):
  Fix Released
Status in Orchestration API (Heat):
  Fix Released
Status in OpenStack Identity (Keystone):
  Fix Released
Status in The Oslo library incubator:
  Fix Released

Bug description:
  common.db.sqlalchemy.session logs a scary warning if create_engine is
  not being called with mysql_traditional_mode set to True:

  WARNING keystone.openstack.common.db.sqlalchemy.session [-] This
  application has not enabled MySQL traditional mode, which means silent
  data corruption may occur. Please encourage the application developers
  to enable this mode.

  That warning is problematic for several reasons:

  (1) It suggests the wrong mode. Arguably TRADITIONAL is better than the 
default, but STRICT_ALL_TABLES would actually be more useful.
  (2) The user has no way to fix the warning.
  (3) The warning does not take into account that a global sql-mode may in fact 
have been set via the server-side MySQL configuration, in which case the 
session *may* in fact be using TRADITIONAL mode all along, despite the warning 
saying otherwise. This makes (2) even worse.

  My suggested approach would be:
  - Remove the warning.
  - Make the SQL mode a config option.

  Patches forthcoming.
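
  One hedged sketch of the second suggestion, assuming a plain SQLAlchemy
  engine; the helper name and the choice of STRICT_ALL_TABLES are
  illustrative, not the actual patch:

  from sqlalchemy import create_engine, event

  def apply_sql_mode(engine, sql_mode="STRICT_ALL_TABLES"):
      # Apply the configured SQL mode to every new DBAPI connection instead
      # of only warning when TRADITIONAL is not enabled.
      @event.listens_for(engine, "connect")
      def _set_sql_mode(dbapi_conn, connection_record):
          cursor = dbapi_conn.cursor()
          cursor.execute("SET SESSION sql_mode = %s", [sql_mode])
          cursor.close()

  engine = create_engine("mysql://keystone:secret@localhost/keystone")
  apply_sql_mode(engine, sql_mode="STRICT_ALL_TABLES")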

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1271706/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261551] Re: LXC volume attach does not work

2014-09-09 Thread Sean Dague
AppArmor issue.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261551

Title:
  LXC volume attach does not work

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  According to the older bug 1009701
  (https://bugs.launchpad.net/nova/+bug/1009701), LXC volume attach
  should begin working with newer versions of libvirt (1.0.1 or 1.0.2).
  Based on testing with libvirt version 1.1.x, however, I get the
  following error:

   libvirtError: Unable to create device /proc/4895/root/dev/sdb:
  Permission denied

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1261551/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1234925] Re: nova-novncproxy logs 'No handlers could be found for logger "nova.openstack.common.rpc.common"' when rabbitmq is unavailable

2014-09-09 Thread Sean Dague
Super old bug; oslo.messaging is now where this should be, if it still
exists.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1234925

Title:
  nova-novncproxy logs 'No handlers could be found for logger
  "nova.openstack.common.rpc.common"' when rabbitmq is unavailable

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  I get 'No handlers could be found for logger
  "nova.openstack.common.rpc.common"' when the rabbitmq host is unavailable.
  If I add 'logging.basicConfig()' to the top of
  nova/openstack/common/log.py, I get
  'ERROR:nova.openstack.common.rpc.common:AMQP server on x.x.x.x:5671 is
  unreachable: [Errno 111] ECONNREFUSED. Trying again in 11 seconds."

  It would seem that the Python logging is getting masked or is
  uninitialized.

  I believe this should be reproducible with the following:

  
https://github.com/openstack/nova/tree/c64aeee362026c5e83f4c34e6469d59c529eeda7

  nova-novncproxy --config-file nova.conf --config-file rabbit.conf

  nova.conf:
  [DEFAULT]
  debug=True
  verbose=True

  rabbit.conf:
  [DEFAULT]
  rabbit_host = localhost
  rabbit_port = 
  rabbit_userid = guest
  rabbit_password = guest

  WARNING: no 'numpy' module, HyBi protocol will be slower  
  WebSocket server settings:
- Listen on 0.0.0.0:6080
- Flash security policy server  
- Web server. Web root: /usr/share/novnc
- No SSL/TLS support (no cert file)
- proxying from 0.0.0.0:6080 to ignore:ignore  

1: x.x.xx: new handler Process  
2: x.x.xx: new handler Process
3: x.x.xx: new handler Process  
1: x.x.xx: "GET /vnc_auto.html?token=-5fed-4b05--0f1b4795cdaa 
HTTP/1.1" 200 -
4: x.x.xx: new handler Process
5: x.x.xx: new handler Process
6: x.x.xx: new handler Process  
7: x.x.xx: new handler Process
7: x.x.xx: Plain non-SSL (ws://) WebSocket connection
7: x.x.xx: Version hybi-13, base64: 'True'  
7: x.x.xx: Path: '/websockify'
  No handlers could be found for logger "nova.openstack.common.rpc.common"

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1234925/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367310] [NEW] Not supplying container format fails but still creates image in 'queued' state

2014-09-09 Thread Rushi Agrawal
Public bug reported:

r@ra:~$ glance image-create --name="demoimg" --disk-format=qcow2 --property 
kernel_id=65c3fe11-dc90-40d0-93ca-6855bc2a83fd --property 
ramdisk_id=bec768bc-9eac-41b2-90f4-fd861b7b8a05 < base.qcow2
Request returned failure status 400.

 
  400 Bad Request
 
 
  400 Bad Request
  Invalid container format 'None' for image.

 
 (HTTP 400)


But while running 'glance image-list', it still says there is an entry for this 
image, but in 'queued' state

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367310

Title:
  Not supplying container format fails but still creates image in
  'queued' state

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  r@ra:~$ glance image-create --name="demoimg" --disk-format=qcow2 --property 
kernel_id=65c3fe11-dc90-40d0-93ca-6855bc2a83fd --property 
ramdisk_id=bec768bc-9eac-41b2-90f4-fd861b7b8a05 < base.qcow2
  Request returned failure status 400.
  
   
400 Bad Request
   
   
400 Bad Request
Invalid container format 'None' for image.

   
   (HTTP 400)

  
  But while running 'glance image-list', it still says there is an entry for 
this image, but in 'queued' state

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367310/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367302] [NEW] Launch Instance security groups checkbox style incorrect

2014-09-09 Thread Justin Pomeroy
Public bug reported:

When launching an instance, the Security Groups checkbox on the Access &
Security tab is not correct.  The checkbox looks huge.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "security_groups_field.png"
   
https://bugs.launchpad.net/bugs/1367302/+attachment/4199714/+files/security_groups_field.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1367302

Title:
  Launch Instance security groups checkbox style incorrect

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When launching an instance, the Security Groups checkbox on the Access
  & Security tab is not correct.  The checkbox looks huge.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1367302/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1366859] Re: Ironic: extra_spec requirement 'amd64' does not match 'x86_64'

2014-09-09 Thread Dan Prince
** Also affects: ironic
   Importance: Undecided
   Status: New

** Changed in: ironic
 Assignee: (unassigned) => Dan Prince (dan-prince)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1366859

Title:
  Ironic: extra_spec requirement 'amd64' does not match 'x86_64'

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  New
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Using the latest Nova Ironic compute drivers (either from Ironic or
  Nova) I'm hitting scheduling ERRORS:

  Sep 08 15:26:45 localhost nova-scheduler[29761]: 2014-09-08
  15:26:45.620 29761 DEBUG
  nova.scheduler.filters.compute_capabilities_filter [req-9e34510e-268c-
  40de-8433-d7b41017b54e None] extra_spec requirement 'amd64' does not
  match 'x86_64' _satisfies_extra_specs
  /opt/stack/venvs/nova/lib/python2.7/site-
  packages/nova/scheduler/filters/compute_capabilities_filter.py:70

  I've gone ahead and patched in
  https://review.openstack.org/#/c/117555/.

  The issue seems to be that ComputeCapabilitiesFilter does not itself
  canonicalize instance_types when comparing them, which breaks
  existing TripleO baremetal clouds using x86_64 (amd64).
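
  A minimal sketch of the kind of canonicalisation the filter needs before
  comparing extra_specs; the mapping table and helper name are illustrative,
  not nova's actual code:

  _CANONICAL_ARCH = {
      'amd64': 'x86_64',
      'x64': 'x86_64',
      'i386': 'i686',
  }

  def canonicalize_arch(arch):
      # Fold packaging aliases onto one canonical name so that 'amd64' and
      # 'x86_64' compare as equal.
      arch = (arch or '').strip().lower()
      return _CANONICAL_ARCH.get(arch, arch)

  # canonicalize_arch('amd64') == canonicalize_arch('x86_64')  # both 'x86_64'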

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1366859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1234026] Re: race condition: disable_dhcp_helper

2014-09-09 Thread Akihiro Motoki
*** This bug is a duplicate of bug 1251874 ***
https://bugs.launchpad.net/bugs/1251874

** This bug has been marked a duplicate of bug 1251874
   reduce severity of network notfound trace when looked up by dhcp agent

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1234026

Title:
  race condition: disable_dhcp_helper

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  While investigating https://bugs.launchpad.net/bugs/1232525, I found
  there is a race condition in disable_dhcp_helper; here is a gate log
  example: http://logs.openstack.org/20/48720/3/gate/gate-tempest-
  devstack-vm-neutron/f10cd53/logs/screen-q-dhcp.txt.gz?level=TRACE

  There is a periodic task (eventlet spawned) in dhcp_agent:
  periodic_resync, which will sync_state and
  disable_dhcp_helper(deleted_id).

  However, if there is an agent notification, network.delete.end (or something
  related like update or refresh may also cause it), then disable_dhcp_helper
  will be invoked:

  def disable_dhcp_helper(self, network_id):
  """Disable DHCP for a network known to the agent."""
  network = self.cache.get_network_by_id(network_id)
  if network:
  if (self.conf.use_namespaces and
  self.conf.enable_isolated_metadata):
  self.disable_isolated_metadata_proxy(network)
  if self.call_driver('disable', network):
  self.cache.remove(network)

  class NetworkCache(object):
  def remove(self, network):
  del self.cache[network.id]
  ...

  Then there is a situation where two disable_dhcp_helper call stacks both
  think the network is not None, but when one of them calls self.cache.remove
  after the other one, a KeyError is raised.

  My simplest fix is adding a check for the network in remove(), such as:

  if network.id not in self.cache:
  return

  but a race still exists (even though the time window is much smaller).

  I'm not quite sure about this approach; any help?
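
  A minimal sketch of the defensive removal suggested above; in CPython,
  dict.pop() with a default is a single step, so two racing callers can no
  longer both pass the membership check and then fight over del (the other
  cache bookkeeping is omitted here):

  class NetworkCache(object):
      def __init__(self):
          self.cache = {}

      def remove(self, network):
          # pop() is idempotent: a second concurrent caller just gets None
          # back instead of raising KeyError.
          if self.cache.pop(network.id, None) is None:
              return
          # ... clean up any per-port lookups for this network here ...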

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1234026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1239606] Re: Use provider:physical_network to propagate it in NetworkInfo

2014-09-09 Thread Sean Dague
https://review.openstack.org/#/c/90666/ seems to have been merged and
the author of the patch above said that was the new fix

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1239606

Title:
  Use  provider:physical_network to propagate it in NetworkInfo

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  provider:physical_network is available in network/neutronv2/api as one
  of attributes of network objects  in method _nw_info_build_network. It
  should be used and added to the network returned by this method.
  Retrieving it from port.binding:profile dictionary should be removed,
  since maintaining this data on port level is supported by specific
  plugin only (Mellanox).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1239606/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360650] Re: test_db_archive_deleted_rows failing in postgres jobs with ProgrammingError

2014-09-09 Thread Davanum Srinivas (DIMS)
Doesn't seem to be happening anymore... Let's reopen if necessary.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1360650

Title:
  test_db_archive_deleted_rows failing in postgres jobs with
  ProgrammingError

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  http://logs.openstack.org/99/106299/12/gate/gate-tempest-dsvm-neutron-
  pg-full/19a6b7d/console.html

  This is mostly in the neutron jobs:

  2014-08-23 17:15:04.731 | 
tempest.cli.simple_read_only.test_nova_manage.SimpleReadOnlyNovaManageTest.test_db_archive_deleted_rows
  2014-08-23 17:15:04.731 | 
---
  2014-08-23 17:15:04.731 | 
  2014-08-23 17:15:04.731 | Captured traceback:
  2014-08-23 17:15:04.732 | ~~~
  2014-08-23 17:15:04.732 | Traceback (most recent call last):
  2014-08-23 17:15:04.732 |   File 
"tempest/cli/simple_read_only/test_nova_manage.py", line 84, in 
test_db_archive_deleted_rows
  2014-08-23 17:15:04.732 | self.nova_manage('db archive_deleted_rows 
50')
  2014-08-23 17:15:04.732 |   File "tempest/cli/__init__.py", line 117, in 
nova_manage
  2014-08-23 17:15:04.732 | 'nova-manage', action, flags, params, 
fail_ok, merge_stderr)
  2014-08-23 17:15:04.733 |   File "tempest/cli/__init__.py", line 53, in 
execute
  2014-08-23 17:15:04.733 | result_err)
  2014-08-23 17:15:04.733 | CommandFailed: Command 
'['/usr/local/bin/nova-manage', 'db', 'archive_deleted_rows', '50']' returned 
non-zero exit status 1.
  2014-08-23 17:15:04.733 | stdout:
  2014-08-23 17:15:04.733 | Command failed, please check log for more info
  2014-08-23 17:15:04.733 | 
  2014-08-23 17:15:04.734 | stderr:
  2014-08-23 17:15:04.734 | 2014-08-23 17:02:31.331 CRITICAL nova 
[req-414244fa-d6c7-4868-8b78-8fe40f119b52 None None] ProgrammingError: 
(ProgrammingError) column "locked_by" is of type shadow_instances0locked_by but 
expression is of type instances0locked_by
  2014-08-23 17:15:04.734 | LINE 1: ...ces.cell_name, instances.node, 
instances.deleted, instances
  2014-08-23 17:15:04.734 | 
 ^
  2014-08-23 17:15:04.734 | HINT:  You will need to rewrite or cast the 
expression.
  2014-08-23 17:15:04.735 |  'INSERT INTO shadow_instances SELECT 
instances.created_at, instances.updated_at, instances.deleted_at, instances.id, 
instances.internal_id, instances.user_id, instances.project_id, 
instances.image_ref, instances.kernel_id, instances.ramdisk_id, 
instances.launch_index, instances.key_name, instances.key_data, 
instances.power_state, instances.vm_state, instances.memory_mb, 
instances.vcpus, instances.hostname, instances.host, instances.user_data, 
instances.reservation_id, instances.scheduled_at, instances.launched_at, 
instances.terminated_at, instances.display_name, instances.display_description, 
instances.availability_zone, instances.locked, instances.os_type, 
instances.launched_on, instances.instance_type_id, instances.vm_mode, 
instances.uuid, instances.architecture, instances.root_device_name, 
instances.access_ip_v4, instances.access_ip_v6, instances.config_drive, 
instances.task_state, instances.default_ephemeral_device, 
instances.default_swap_device, 
 instances.progress, instances.auto_disk_config, instances.shutdown_terminate, 
instances.disable_terminate, instances.root_gb, instances.ephemeral_gb, 
instances.cell_name, instances.node, instances.deleted, instances.locked_by, 
instances.cleaned, instances.ephemeral_key_uuid \nFROM instances \nWHERE 
instances.deleted != %(deleted_1)s ORDER BY instances.id \n LIMIT %(param_1)s' 
{'param_1': 39, 'deleted_1': 0}
  2014-08-23 17:15:04.735 | 2014-08-23 17:02:31.331 22441 TRACE nova 
Traceback (most recent call last):
  2014-08-23 17:15:04.735 | 2014-08-23 17:02:31.331 22441 TRACE nova   File 
"/usr/local/bin/nova-manage", line 10, in 
  2014-08-23 17:15:04.735 | 2014-08-23 17:02:31.331 22441 TRACE nova 
sys.exit(main())
  2014-08-23 17:15:04.736 | 2014-08-23 17:02:31.331 22441 TRACE nova   File 
"/opt/stack/new/nova/nova/cmd/manage.py", line 1401, in main
  2014-08-23 17:15:04.736 | 2014-08-23 17:02:31.331 22441 TRACE nova 
ret = fn(*fn_args, **fn_kwargs)
  2014-08-23 17:15:04.736 | 2014-08-23 17:02:31.331 22441 TRACE nova   File 
"/opt/stack/new/nova/nova/cmd/manage.py", line 920, in archive_deleted_rows
  2014-08-23 17:15:04.736 | 2014-08-23 17:02:31.331 22441 TRACE nova 
db.archive_deleted_rows(admin_context, max_rows)
  2014-08-23 17:15:04.736 | 2014-08-23 17:02:31.331 22441 TRACE nova   File 
"/opt/stack/new/nova/nova/db/api.py", line 1959, in archive_deleted_rows
  2014-08-23 17:15:04.737 | 2014-0

[Yahoo-eng-team] [Bug 1291991] Re: ipmi cmds run too fast, cause BMC to run out of resources

2014-09-09 Thread Dan Prince
** Changed in: nova
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291991

Title:
  ipmi cmds run too fast, cause BMC to run out of resources

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Fix Released
Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  When using Nova baremetal the IPMI power commands are still proving to
  be too fast. I routinely get stack traces that look like this when
  deleting baremetal instances:

  2 10:39:33.351 5112 TRACE nova.compute.manager [instance: 
032250f7-3255-47f5-b866-35687dcd14ce] ProcessExecutionError: Unexpected error 
while running command.
  Mar 12 10:39:33 undercloud-undercloud-7be7u2y6y5cz nova-compute[5112]: 
2014-03-12 10:39:33.351 5112 TRACE nova.compute.manager [instance: 
032250f7-3255-47f5-b866-35687dcd14ce] Command: ipmitool -I lanplus -H 10.1.8.23 
-U ooo-dev -f /tmp/tmpMa8D4u power status
  Mar 12 10:39:33 undercloud-undercloud-7be7u2y6y5cz nova-compute[5112]: 
2014-03-12 10:39:33.351 5112 TRACE nova.compute.manager [instance: 
032250f7-3255-47f5-b866-35687dcd14ce] Exit code: 1
  Mar 12 10:39:33 undercloud-undercloud-7be7u2y6y5cz nova-compute[5112]: 
2014-03-12 10:39:33.351 5112 TRACE nova.compute.manager [instance: 
032250f7-3255-47f5-b866-35687dcd14ce] Stdout: ''
  Mar 12 10:39:33 undercloud-undercloud-7be7u2y6y5cz nova-compute[5112]: 
2014-03-12 10:39:33.351 5112 TRACE nova.compute.manager [instance: 
032250f7-3255-47f5-b866-35687dcd14ce] Stderr: 'Error in open session response 
message : insufficient resources for session\n\nError: Unable to establish IPMI 
v2 / RMCP+ session\nUnable to get Chassis Power Status\n'
  Mar 12 10:39:33 undercloud-undercloud-7be7u2y6y5cz nova-compute[5112]: 
2014-03-12 10:39:33.351 5112 TRACE nova.compute.manager [instance: 
032250f7-3255-47f5-b866-35687dcd14ce]
  Mar 12 10:39:33 undercloud-undercloud-7be7u2y6y5cz nova-compute[5112]: 
2014-03-12 10:39:33.931 5112 ERROR oslo.messaging.rpc.dispatcher [-] Exception 
during message handling: Unexpected error while running command.

  

  The root cause seems to be in the _power_off routine, which repeatedly
  calls "power status" to determine if the instance has properly powered
  down after issuing the "power off". Once this fails, simply resetting
  the instance state and retrying the delete usually fixes the
  issue.

  On the CLI the same commands always seem to work as well.

  It does seem like our retry code is still too aggressive and we need
  to wait longer for each IPMI retry.
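
  A rough sketch of a gentler polling loop, with exponential back-off between
  the "power status" calls so the BMC has time to free the previous session;
  the function and its defaults are illustrative, not nova's baremetal driver:

  import subprocess
  import time

  def wait_for_power_off(host, user, password_file,
                         retries=6, initial_delay=2.0):
      delay = initial_delay
      for _ in range(retries):
          try:
              out = subprocess.check_output(
                  ['ipmitool', '-I', 'lanplus', '-H', host,
                   '-U', user, '-f', password_file, 'power', 'status'])
          except subprocess.CalledProcessError:
              out = b''  # the BMC refused the session; treat as "not off yet"
          if b'off' in out.lower():
              return True
          time.sleep(delay)
          delay *= 2  # back off instead of hammering the BMC
      return False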

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1291991/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1294853] Re: service_get_all in nova.compute.api should return a List object and should not do a filtering

2014-09-09 Thread Sean Dague
** Changed in: nova
   Status: In Progress => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
 Assignee: Pawel Koniszewski (pawel-koniszewski) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1294853

Title:
  service_get_all in nova.compute.api should return a List object and
  should not do a filtering

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  service_get_all is filtering the results returned by the service
  object and returning an array. This API should return a List object
  instead, and the filtering should be done in the sqlalchemy API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1294853/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1311500] Re: Nova 'os-security-group-default-rules' API does not work with neutron

2014-09-09 Thread Sean Dague
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1311500

Title:
  Nova 'os-security-group-default-rules' API does not work with neutron

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  The Nova API 'os-security-group-default-rules' does not work if
  'conf->security_group_api' is 'neutron'.

  I wrote the test cases for the above Nova API
  (https://review.openstack.org/#/c/87924) and they fail in the gate neutron
  tests.

  I further investigated this issue and found that in
  'nova/api/openstack/compute/contrib/security_group_default_rules.py',
  'security_group_api' is set according to  'conf->security_group_api'
  
(https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/security_group_default_rules.py#L107).

  If 'conf->security_group_api' is 'nova' then,
  'NativeNovaSecurityGroupAPI(NativeSecurityGroupExceptions,
  compute_api.SecurityGroupAPI)' is being used in this API and no issue
  here. It works fine.

  If 'conf->security_group_api' is 'neutron' then,
  'NativeNeutronSecurityGroupAPI(NativeSecurityGroupExceptions,
  neutron_driver.SecurityGroupAPI)' is being used in this API and
  'neutron_driver.SecurityGroupAPI'
  
(https://github.com/openstack/nova/blob/master/nova/network/security_group/neutron_driver.py#L48)
does not have any of the functions that are being called from this
  API class, so it raises an AttributeError
  (http://logs.openstack.org/24/87924/2/check/check-tempest-dsvm-
  neutron-full/7951abf/logs/screen-n-api.txt.gz).

  Traceback -
  .
  .
  2014-04-21 00:44:22.430 10186 TRACE nova.api.openstack   File 
"/opt/stack/new/nova/nova/api/openstack/compute/contrib/security_group_default_rules.py",
 line 130, in create
  2014-04-21 00:44:22.430 10186 TRACE nova.api.openstack if 
self.security_group_api.default_rule_exists(context, values):
  2014-04-21 00:44:22.430 10186 TRACE nova.api.openstack AttributeError: 
'NativeNeutronSecurityGroupAPI' object has no attribute 'default_rule_exists'

  I think this API is only for nova-network, as currently no
  such feature exists in neutron. So this API should always use the nova
  network security group driver
  
(https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/security_groups.py#L669).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1311500/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1343858] Re: build/resize retry behavior not consistent

2014-09-09 Thread Sean Dague
** Changed in: nova
   Status: In Progress => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1343858

Title:
  build/resize retry behavior not consistent

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  nova/scheduler/utils.py:

  Case 1: when CONF.scheduler_max_attempts > 1, if the request contained an 
exception from a previous compute build/resize operation, the exception message 
would be logged in conductor.log.
  Case 2: when CONF.scheduler_max_attempts == 1, if the request contained an 
exception from a previous compute build/resize operation, the exception message 
would not be logged in conductor.log.

  I think these two cases should behave consistently; even if this does not
  cause anything to go wrong, it keeps the code strict.

  
  def populate_retry(filter_properties, instance_uuid):
  max_attempts = _max_attempts()
  force_hosts = filter_properties.get('force_hosts', [])
  force_nodes = filter_properties.get('force_nodes', [])

  if max_attempts == 1 or force_hosts or force_nodes:
  # re-scheduling is disabled.
  return

  # retry is enabled, update attempt count:
  retry = filter_properties.setdefault(
  'retry', {
  'num_attempts': 0,
  'hosts': []  # list of compute hosts tried
  })
  retry['num_attempts'] += 1

  _log_compute_error(instance_uuid, retry)  <<< would not run here
  when  max_attempts == 1

  if retry['num_attempts'] > max_attempts:
  exc = retry.pop('exc', None)
  msg = (_('Exceeded max scheduling attempts %(max_attempts)d '
   'for instance %(instance_uuid)s. '
   'Last exception: %(exc)s.')
 % {'max_attempts': max_attempts,
'instance_uuid': instance_uuid,
'exc': exc})
  raise exception.NoValidHost(reason=msg)
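
  One hedged way to make the two cases consistent is to log any exception
  carried over from the previous attempt before the early return, so the
  single-attempt path behaves like the multi-attempt one (a sketch against
  the function above, not the merged fix):

  def populate_retry(filter_properties, instance_uuid):
      max_attempts = _max_attempts()
      force_hosts = filter_properties.get('force_hosts', [])
      force_nodes = filter_properties.get('force_nodes', [])

      # Log the previous compute error (if any) regardless of whether
      # re-scheduling is enabled.
      _log_compute_error(instance_uuid, filter_properties.get('retry', {}))

      if max_attempts == 1 or force_hosts or force_nodes:
          # re-scheduling is disabled.
          return

      # retry bookkeeping continues exactly as in the snippet above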

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1343858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1175667] Re: nova flavor-show does not return the 'latest' version of a flavor

2014-09-09 Thread Sean Dague
This looks fixed upstream

** Changed in: nova
   Status: In Progress => Fix Released

** Changed in: nova
   Importance: Undecided => Medium

** Changed in: nova
 Assignee: Darren Birkett (darren-birkett) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1175667

Title:
  nova flavor-show does not return the 'latest' version of a flavor

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  Create a new flavor:

  root@devstack1:/opt/stack/nova/nova# nova flavor-create nynewflavor 100 128 
20 1
  
+-+-+---+--+---+--+---+-+---+
  | ID  | Name| Memory_MB | Disk | Ephemeral | Swap | VCPUs | 
RXTX_Factor | Is_Public |
  
+-+-+---+--+---+--+---+-+---+
  | 100 | nynewflavor | 128   | 20   | 0 |  | 1 | 1.0   
  | True  |
  
+-+-+---+--+---+--+---+-+---+

  root@devstack1:/opt/stack/nova/nova# nova flavor-show 100
  ++-+
  | Property   | Value   |
  ++-+
  | name   | nynewflavor |
  | ram| 128 |
  | OS-FLV-DISABLED:disabled   | False   |
  | vcpus  | 1   |
  | extra_specs| {}  |
  | swap   | |
  | os-flavor-access:is_public | True|
  | rxtx_factor| 1.0 |
  | OS-FLV-EXT-DATA:ephemeral  | 0   |
  | disk   | 20  |
  | id | 100 |
  ++-+

  Delete the flavor and create a new flavor with the same flavorID:

  root@devstack1:/opt/stack/nova/nova# nova flavor-delete 100
  root@devstack1:/opt/stack/nova/nova# nova flavor-create nynewnewnewflavor 100 
128 20 1
  
+-+---+---+--+---+--+---+-+---+
  | ID  | Name  | Memory_MB | Disk | Ephemeral | Swap | VCPUs | 
RXTX_Factor | Is_Public |
  
+-+---+---+--+---+--+---+-+---+
  | 100 | nynewnewnewflavor | 128   | 20   | 0 |  | 1 | 1.0 
| True  |
  
+-+---+---+--+---+--+---+-+---+
  root@devstack1:/opt/stack/nova/nova# nova flavor-show 100
  ++---+
  | Property   | Value |
  ++---+
  | name   | nynewnewnewflavor |
  | ram| 128   |
  | OS-FLV-DISABLED:disabled   | False |
  | vcpus  | 1 |
  | extra_specs| {}|
  | swap   |   |
  | os-flavor-access:is_public | True  |
  | rxtx_factor| 1.0   |
  | OS-FLV-EXT-DATA:ephemeral  | 0 |
  | disk   | 20|
  | id | 100   |
  ++---+

  Delete this flavor and then flavor-show the ID

  root@devstack1:/opt/stack/nova/nova# nova flavor-delete 100
  root@devstack1:/opt/stack/nova/nova# nova flavor-show 100
  ++-+
  | Property   | Value   |
  ++-+
  | name   | nynewflavor |
  | ram| 128 |
  | OS-FLV-DISABLED:disabled   | False   |
  | vcpus  | 1   |
  | extra_specs| {}  |
  | swap   | |
  | os-flavor-access:is_public | True|
  | rxtx_factor| 1.0 |
  | OS-FLV-EXT-DATA:ephemeral  | 0   |
  | disk   | 20  |
  | id | 100 |
  ++-+

  I see the FIRST instance of the flavor.  I think I always want to see
  the latest version of a flavor, deleted or active.  Rinse and repeat
  with the create/deletes, I will only ever see the first version of
  that flavor.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1175667/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1100799] Re: os-services API extension does not follow REST's CRUD principles

2014-09-09 Thread Sean Dague
I think all these API design points shouldn't be bugs any more

** Changed in: nova
   Importance: Undecided => Wishlist

** Changed in: nova
 Assignee: Tiago Rodrigues de Mello (timello) => (unassigned)

** Changed in: nova
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1100799

Title:
  os-services API extension does not follow REST's CRUD principles

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  The os-services extension builds a non-standard URL format for the update
  action. The current URL is os-services/[enable|disable]; it should
  be os-services/ with the action passed via the body instead.
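
  A hedged illustration of the resource-style update being argued for, with
  the action carried in the request body; the endpoint shape and payload
  field names are hypothetical, not an existing nova API:

  import requests

  def set_service_status(endpoint, token, service_id, status):
      # PUT os-services/<id> with the desired state in the body, instead of
      # encoding the action in the URL as os-services/enable|disable.
      resp = requests.put(
          '%s/os-services/%s' % (endpoint, service_id),
          json={'service': {'status': status}},  # 'enabled' or 'disabled'
          headers={'X-Auth-Token': token})
      resp.raise_for_status()
      return resp.json()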

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1100799/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367243] [NEW] Bad error report when trying to connect a VM to an overcrowded network

2014-09-09 Thread Yair Fried
Public bug reported:

When trying to create a port and the network has no more addresses, Neutron returns 
this error:
No more IP addresses available on network a4e997dc-ba2e--9394-cfd89f670886.

However, when trying to create a VM in a network that has no more addresses, 
the VM is created with Error details:
"No valid host was found"

That's because the compute agent sees the PortLimitExceeded error from 
neutron and reports it to the scheduler.
The scheduler mistakes that for a hypervisor limit and tries another compute 
(which fails for the same reason) and finally reports the error as "No host 
found", while the error should be "No more IP addresses available on network 
a4e997dc-ba2e--9394-cfd89f670886"


Neutron, however, doesn't register any errors for this flow.

Recreate:
1. Create a subnet with 31 mask bits (which means no ports are available for VMs)
2. Boot a VM. Errors will be seen in nova-compute and nova-scheduler

nova compute:
2014-09-09 14:30:11.517 123433 AUDIT nova.compute.manager 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Starting instance...
2014-09-09 14:30:11.608 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Attempting claim: memory 64 MB, disk 0 
GB, VCPUs 1
2014-09-09 14:30:11.608 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Total memory: 31952 MB, used: 1152.00 MB
2014-09-09 14:30:11.609 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] memory limit: 47928.00 MB, free: 46776.00 
MB
2014-09-09 14:30:11.609 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Total disk: 442 GB, used: 0.00 GB
2014-09-09 14:30:11.609 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] disk limit not specified, defaulting to 
unlimited
2014-09-09 14:30:11.609 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Total CPUs: 24 VCPUs, used: 10.00 VCPUs
2014-09-09 14:30:11.610 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] CPUs limit: 384.00 VCPUs, free: 374.00 
VCPUs
2014-09-09 14:30:11.610 123433 AUDIT nova.compute.claims 
[req-19bf0ebe-4f8d-47bb-9506-617ae54cd6b4 e9bb9ce3fbe344e9b49182a13dcfb9c3 
a0a2f1afe57d422887b48c204d536df0] [instance: 
8f50312c-f47e-4793-aff7-a890f20ee2bb] Claim successful
2014-09-09 14:30:12.141 123433 WARNING nova.network.neutronv2.api [-] Neutron 
error: quota exceeded
2014-09-09 14:30:12.142 123433 ERROR nova.compute.manager [-] Instance failed 
network setup after 1 attempt(s)
2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager Traceback (most 
recent call last):
2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1528, in 
_allocate_network_async
2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager 
dhcp_options=dhcp_options)
2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 360, in 
allocate_for_instance
2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager 
LOG.exception(msg, port_id)
2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, 
in __exit__
2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager 
six.reraise(self.type_, self.value, self.tb)
2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 335, in 
allocate_for_instance
2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager 
security_group_ids, available_macs, dhcp_opts)
2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 191, in 
_create_port
2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager raise 
exception.PortLimitExceeded()
2014-09-09 14:30:12.142 123433 TRACE nova.compute.manager PortLimitExceeded: 
Maximum number of ports

[Yahoo-eng-team] [Bug 1367229] [NEW] securitygroups_rpc is_firewall_enabled should return False if it is not a valid driver combination

2014-09-09 Thread Claudiu Belu
Public bug reported:

In neutron.agent.securitygroups_rpc, the method is_firewall_enabled
contains this code:

def is_firewall_enabled():
if not _is_valid_driver_combination():
LOG.warn(_("Driver configuration doesn't match with "
   "enable_security_group"))

return cfg.CONF.SECURITYGROUP.enable_security_group

The function should return False if not _is_valid_driver_combination. 
Otherwise, it could return True in a case it shouldn't:
cfg.CONF.SECURITYGROUP.firewall_driver = 
'neutron.agent.firewall.NoopFirewallDriver'
cfg.CONF.SECURITYGROUP.enable_security_group = True

** Affects: neutron
 Importance: Undecided
 Assignee: Claudiu Belu (cbelu)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Claudiu Belu (cbelu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367229

Title:
  securitygroups_rpc is_firewall_enabled should return False if it is
  not a valid driver combination

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In neutron.agent.securitygroups_rpc, the method is_firewall_enabled
  contains this code:

  def is_firewall_enabled():
  if not _is_valid_driver_combination():
  LOG.warn(_("Driver configuration doesn't match with "
 "enable_security_group"))

  return cfg.CONF.SECURITYGROUP.enable_security_group

  The function should return False if not _is_valid_driver_combination. 
Otherwise, it could return True in a case it shouldn't:
  cfg.CONF.SECURITYGROUP.firewall_driver = 
'neutron.agent.firewall.NoopFirewallDriver'
  cfg.CONF.SECURITYGROUP.enable_security_group = True
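
  A minimal sketch of the suggested change, reusing the names already shown
  above:

  def is_firewall_enabled():
      if not _is_valid_driver_combination():
          LOG.warn(_("Driver configuration doesn't match with "
                     "enable_security_group"))
          # An invalid combination should disable the firewall rather than
          # fall through to the configured value.
          return False

      return cfg.CONF.SECURITYGROUP.enable_security_group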

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367229/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367218] [NEW] Broken MySQL connection causes internal server error

2014-09-09 Thread Jan Provaznik
Public bug reported:

When the MySQL connection is broken (the MySQL server is restarted or a virtual IP
is moved around in a typical HA setup), keystone doesn't notice that the
connection was closed on the other side, and the first request after this
outage fails. Because other OpenStack services authenticate incoming
requests with keystone, the "first after-outage" request fails no matter
what service is contacted.

I think the problem might be solved by catching DBConnectionError in the SQL
backend and reconnecting to the MySQL server before an internal server
error is returned to the user. Alternatively, it could be solved by adding
heartbeat checks for the MySQL connection (which is probably more complex).
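
A rough sketch of the first approach, assuming a plain SQLAlchemy engine; the
helper is illustrative, not keystone's actual backend code. It drops the stale
pooled connections and retries the query once:

import sqlalchemy.exc

def run_with_reconnect(engine, query_fn):
    # Retry a read-only query once when the server closed the connection
    # ("MySQL server has gone away"); engine.dispose() throws away the stale
    # pooled connections so the retry gets a fresh one.
    try:
        return query_fn()
    except sqlalchemy.exc.OperationalError as exc:
        if exc.connection_invalidated or '2006' in str(exc):
            engine.dispose()
            return query_fn()
        raise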

Example of failed request and server side error:

[tripleo@dell-per720xd-01 tripleo]$ keystone service-list
Authorization Failed: An unexpected error prevented the server from fulfilling 
your request. (HTTP 500)

Server log:
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 ERROR keystone.common.wsgi [-] (OperationalError) 
(2006, 'MySQL server has gone away') 'SELECT user.id AS user_id, user.name AS 
user_name, user.domain_id AS user_domain_id, user.password AS user_password, 
user.enabled AS user_enabled, user.extra AS user_extra, user.default_project_id 
AS user_default_project_id \nFROM user \nWHERE user.name = %s AND 
user.domain_id = %s' ('admin', 'default')
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi Traceback (most recent 
call last):
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/keystone/common/wsgi.py",
 line 223, in __call__
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi result = 
method(context, **params)
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/keystone/token/controllers.py",
 line 100, in authenticate
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi context, auth)
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/keystone/token/controllers.py",
 line 287, in _authenticate_local
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi username, 
CONF.identity.default_domain_id)
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/keystone/identity/core.py",
 line 182, in wrapper
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi return f(self, 
*args, **kwargs)
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/keystone/identity/core.py",
 line 193, in wrapper
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi return f(self, 
*args, **kwargs)
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/keystone/identity/core.py",
 line 580, in get_user_by_name
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi ref = 
driver.get_user_by_name(user_name, domain_id)
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/keystone/identity/backends/sql.py",
 line 140, in get_user_by_name
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi user_ref = 
query.one()
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/sqlalchemy/orm/query.py",
 line 2369, in one
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.158 21782 TRACE keystone.common.wsgi ret = list(self)
Sep 09 08:26:43 overcloud-controller0-rgy4hdcgqchc keystone-all[21782]: 
2014-09-09 08:26:43.

[Yahoo-eng-team] [Bug 1367186] [NEW] Instances stuck with task_state of unshelving after RPC call timeout.

2014-09-09 Thread Takashi NATSUME
Public bug reported:

Instances get stuck with a task_state of unshelving after the RPC call between
nova-conductor and nova-scheduler fails (because of, for example, a
timeout) during the unshelve operation.

The environment:
Ubuntu 14.04 LTS(64bit)
stable/icehouse(2014.1.2)
(I could also reproduce it with 
master(commit:a1fa42f2ad11258f8b9482353e078adcf73ee9c2).)

How to reproduce:
1. create a VM instance
2. shelve the VM instance
3. stop nova-scheduler process
4. unshelve the VM instance
(The nova-conductor calls the nova-scheduler, but the RPC call times out.)

Then the VM instance gets stuck with a task_state of unshelving (see the following).
The VM instance still remains stuck even after the nova-scheduler process starts 
again.

stack@devstack-icehouse:/opt/devstack$ nova list
+--+-+---++-+---+
| ID   | Name| Status| Task 
State | Power State | Networks  |
+--+-+---++-+---+
| 12e488e8-1df1-479d-866e-51c3117e384b | server1 | SHELVED_OFFLOADED | 
unshelving | Shutdown| public=10.0.2.194 |
+--+-+---++-+---+

nova-conductor.log:
---
2014-09-09 18:18:13.263 13087 ERROR oslo.messaging.rpc.dispatcher [-] Exception 
during message handling: Timed out waiting for a reply to message ID 
934be80a9798443597f355d60fa08e56
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/conductor/manager.py", line 849, in unshelve_instance
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher instance)
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/conductor/manager.py", line 816, in _schedule_instances
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher 
request_spec, filter_properties)
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/nova/nova/scheduler/rpcapi.py", line 103, in select_destinations
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher 
request_spec=request_spec, filter_properties=filter_properties)
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 
152, in call
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher 
retry=self.retry)
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, 
in _send
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout, retry=retry)
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", 
line 404, in send
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher 
retry=retry)
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", 
line 393, in _send
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher result = 
self._waiter.wait(msg_id, timeout)
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", 
line 281, in wait
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher reply, 
ending = self._poll_connection(msg_id, timeout)
2014-09-09 18:18:13.263 13087 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/_drivers/a

[Yahoo-eng-team] [Bug 1367172] [NEW] glance-manage db_load_metadefs should use 'with open as...'

2014-09-09 Thread Pawel Koniszewski
Public bug reported:

Currently the script which loads metadata definitions into the database uses a
try/except block with open(file) inside. There is a better solution in Python
for file streams - use 'with open(file) as ...'. With this approach we can be
sure that every resource is cleaned up when the code finishes running.
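
A minimal sketch of the suggested pattern (illustrative only; the function
name and path below are placeholders, not the actual glance-manage code):

import json

def load_metadef_file(path):
    # 'with' guarantees the file is closed even if json.load() raises,
    # replacing the try/except-around-open() pattern described above.
    with open(path) as f:
        return json.load(f)

metadef = load_metadef_file('/etc/glance/metadefs/example.json')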

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1367172

Title:
  glance-manage db_load_metadefs should use 'with open as...'

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Currently the script which loads metadata definitions into the database
  uses a try/except block with open(file) inside. There is a better solution
  in Python for file streams - use 'with open(file) as ...'. With this
  approach we can be sure that every resource is cleaned up when the code
  finishes running.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1367172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367168] [NEW] Default fixtures with ID in Templated catalog

2014-09-09 Thread Marcos Lobo
Public bug reported:

In the default_catalog.templates example file [1] there is no 'id' attribute
in the model but, in the templated catalog tests, there are fixtures [2]
with an 'id' attribute. Honestly, I'm not sure whether this is a bug or not,
but I have some questions.

Is this correct? Is the templated backend ready to handle every new
attribute that we want to include ourselves?

[1] 
https://github.com/openstack/keystone/blob/master/etc/default_catalog.templates
[2] 
https://github.com/openstack/keystone/blob/master/keystone/tests/test_backend_templated.py#L38
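
For reference, entries in the templated catalog file follow a
catalog.$Region.$service.$key = $value layout, roughly like the lines below;
the 'id' line is only an illustration of the extra attribute carried by the
test fixtures, it is not present in [1]:

catalog.RegionOne.identity.publicURL = http://localhost:$(public_port)s/v2.0
catalog.RegionOne.identity.adminURL = http://localhost:$(admin_port)s/v2.0
catalog.RegionOne.identity.name = Identity Service
catalog.RegionOne.identity.id = 1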

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1367168

Title:
  Default fixtures with ID in Templated catalog

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In the default_catalog.templates example file [1] there is no 'id'
  attribute in the model but, in the templated catalog tests, there are
  fixtures [2] with an 'id' attribute. Honestly, I'm not sure whether this is
  a bug or not, but I have some questions.

  Is this correct? Is the templated backend ready to handle every new
  attribute that we want to include ourselves?

  [1] 
https://github.com/openstack/keystone/blob/master/etc/default_catalog.templates
  [2] 
https://github.com/openstack/keystone/blob/master/keystone/tests/test_backend_templated.py#L38

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1367168/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367157] [NEW] HA network remains even if there are no more HA routers

2014-09-09 Thread Sylvain Afchain
Public bug reported:

Currently, when the last HA router of a tenant is deleted, the HA network
belonging to this tenant is not removed. This is the case both in the
rollback of a router creation and in delete_router itself.
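
A rough sketch of the expected cleanup (hypothetical helper names, not the
actual Neutron L3-HA code): after deleting an HA router, check whether the
tenant still has HA routers and, if not, drop the tenant's HA network too.

def delete_router(plugin, context, router_id):
    router = plugin.get_router(context, router_id)
    plugin._delete_router(context, router_id)             # hypothetical helper
    if router.get('ha'):
        remaining = plugin.get_routers(
            context,
            filters={'tenant_id': [router['tenant_id']], 'ha': [True]})
        if not remaining:
            # Last HA router of the tenant: remove its HA network as well.
            plugin._delete_ha_network(context, router['tenant_id'])  # hypothetical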

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1367157

Title:
  HA network remains even if there are no more HA routers

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, when the last HA router of a tenant is deleted, the HA network
  belonging to this tenant is not removed. This is the case both in the
  rollback of a router creation and in delete_router itself.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1367157/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367151] [NEW] VMware: VMs created by VC 5.5 are not compatible with older clusters

2014-09-09 Thread Gary Kotton
Public bug reported:

VMs created by VC 5.0 and 5.1 will have hardware version 8. VMs created by
VC 5.5 will have hardware version 10. This breaks compatibility for VMs on
older clusters.
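
A hedged illustration only (helper name and mapping chosen for this sketch,
not taken from the Nova VMware driver): pin new VMs to the lowest virtual
hardware version supported by all managed vCenter versions, so VMs created
through a 5.5 endpoint remain usable on 5.0/5.1 clusters.

def pick_hw_version(vc_versions):
    """Return the vmx-NN hardware version string usable by every given VC."""
    # VC 5.0/5.1 support up to hardware version 8 ("vmx-08"); VC 5.5 supports
    # up to version 10 ("vmx-10").
    max_hw = {"5.0": 8, "5.1": 8, "5.5": 10}
    lowest = min(max_hw.get(v, 8) for v in vc_versions)
    return "vmx-%02d" % lowest

print(pick_hw_version(["5.5", "5.0"]))  # -> vmx-08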

** Affects: nova
 Importance: Critical
 Assignee: Gary Kotton (garyk)
 Status: In Progress


** Tags: icehouse-backport-potential vmware

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
 Assignee: (unassigned) => Gary Kotton (garyk)

** Tags added: icehouse-backport-potential vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367151

Title:
  VMware: VMs created by VC 5.5 are not compatible with older clusters

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  VMs created by VC 5.0 and 5.1 will have hardware version 8. VMs created by
  VC 5.5 will have hardware version 10. This breaks compatibility for VMs on
  older clusters.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367151/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308171] Re: Ironic nova driver should be more efficient with ironic client calls

2014-09-09 Thread Dmitry "Divius" Tantsur
The Ironic driver now lives in the Nova tree, so we can't fix this bug on
the Ironic side.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: ironic
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308171

Title:
  Ironic nova driver should be more efficient with ironic client calls

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Invalid
Status in OpenStack Compute (Nova):
  New

Bug description:
  When the nova ironic driver makes ironic client calls, it currently
  gets a brand new client for each call. This can be done more
  efficiently by caching the client and re-authenticating as needed
  (e.g., when the auth token expires). With recent changes to the driver
  to cleanup the retry logic, this should be easier to do. [1][2] The
  driver class can contain a single IronicClientWrapper reference to use
  for the client calls, and the IronicClientWrapper object can handle
  caching and reauthenticating as needed.

  [1] https://review.openstack.org/83105
  [2] https://review.openstack.org/86993
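
  A minimal sketch of the caching idea (not the actual Nova code; the retry
  policy and the exception class used to detect an expired token are
  assumptions): keep one authenticated client around and only rebuild it
  when a call fails with an authentication error.

  from ironicclient import client as ironic_client
  from ironicclient import exc as ironic_exc

  class IronicClientWrapper(object):
      def __init__(self, **auth_kwargs):
          self._auth_kwargs = auth_kwargs
          self._cached_client = None

      def _get_client(self):
          if self._cached_client is None:
              self._cached_client = ironic_client.get_client(
                  1, **self._auth_kwargs)
          return self._cached_client

      def call(self, method, *args, **kwargs):
          """Call e.g. 'node.get' on the cached client, re-auth once on 401."""
          for attempt in (1, 2):
              client = self._get_client()
              resource, _, action = method.rpartition('.')
              try:
                  return getattr(getattr(client, resource), action)(
                      *args, **kwargs)
              except ironic_exc.Unauthorized:
                  # Token probably expired: drop the cache and retry once.
                  self._cached_client = None
                  if attempt == 2:
                      raise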

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1308171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338403] Re: circular reference detected with exception

2014-09-09 Thread Dmitry "Divius" Tantsur
The Ironic driver now lives in the Nova tree, so we can't fix this bug on
the Ironic side.

** Changed in: ironic
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1338403

Title:
  circular reference detected with exception

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Invalid
Status in OpenStack Compute (Nova):
  New

Bug description:
  2014-07-07 02:10:08.727 10283 ERROR oslo.messaging.rpc.dispatcher 
[req-54c68afe-91a8-4a99-86e8-785c0abf7688 ] Exception during message handling: 
Circular reference detected
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 133, in _dispatch_and_reply
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 176, in _dispatch
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 122, in _do_dispatch
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py", 
line 88, in wrapped
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/exception.py", 
line 71, in wrapped
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 336, in decorated_function
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
function(self, context, *args, **kwargs)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/compute/utils.py",
 line 437, in __exit__
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
exc_tb=exc_tb)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/objects/base.py", 
line 142, in wrapper
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher args, 
kwargs)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/conductor/rpcapi.py",
 line 355, in object_class_action
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
objver=objver, args=args, kwargs=kwargs)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/client.py",
 line 150, in call
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
wait_for_reply=True, timeout=timeout)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/transport.py",
 line 90, in _send
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher 
timeout=timeout)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
 line 412, in send
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher return 
self._send(target, ctxt, message, wait_for_reply, timeout)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
 line 385, in _send
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher msg = 
rpc_common.serialize_msg(msg)
  2014-07-07 02:10:08.727 10283 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/

[Yahoo-eng-team] [Bug 1289048] Re: Ironic nova driver spawn() makes too many redundant calls

2014-09-09 Thread Dmitry "Divius" Tantsur
This bug should now be fixed in Nova, as the Ironic driver now lives there.

** Summary changed:

- nova virt driver performance issue
+ Ironic nova driver spawn() makes too many redundant calls

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: ironic
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1289048

Title:
  Ironic nova driver spawn() makes too many redundant calls

Status in OpenStack Bare Metal Provisioning Service (Ironic):
  Invalid
Status in OpenStack Compute (Nova):
  New

Bug description:
  The nova virt driver has a scale issue in that it makes way too many API
  calls, especially on behalf of spawn. I have submitted a review to remove
  at least 2 API calls by not having to re-query the node when plugging
  vifs. But after that, for spawn we are still left with:

  spawn() gets node from Ironic
  spawn() updates node
  spawn() -> _add_driver_fields() updates node 'n' times where 'n' is # of 
required pxe fields
  spawn() makes a node validate call
  spawn() -> _plug_vifs() -> _unplug_vifs() lists ports.
  spawn() -> _plug_vifs() -> _unplug_vifs() makes 'n' calls to update ports 
where 'n' is number of ports.
  spawn() -> _plug_vifs() lists ports.
  spawn() -> _plug_vifs() makes 'n' calls to update ports where 'n' is number 
of ports.
  spawn() updates node to set 'active' provision state

  We need to figure out a way to make batch calls for some of these,
  IMO.
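
  A hedged sketch of the batching idea for the driver-field updates (the
  field names and patch paths are illustrative assumptions, not the real
  driver code): build one JSON-patch document and send it in a single
  node.update() call instead of one call per field.

  def add_driver_fields(ironic, node_uuid, instance, image_meta):
      driver_fields = {
          'image_source': image_meta['id'],        # hypothetical field names
          'root_gb': instance['root_gb'],
          'display_name': instance['display_name'],
      }
      patch = [{'op': 'add',
                'path': '/instance_info/%s' % key,
                'value': value}
               for key, value in driver_fields.items()]
      # One API round trip instead of len(driver_fields) separate updates.
      ironic.node.update(node_uuid, patch)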

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1289048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

