[Openstack] Failed to boot vm in OpenStack Grizzly

2013-07-06 Thread Arindam Choudhury
Hi,

I am deploying OpenStack Grizzly with Neutron (Quantum) + GRE using DevStack 
(Grizzly branch) on Fedora 18 (Spanish locale):

Whenever I try to instantiate a virtual machine I get the following error:

libvir: QEMU Driver error : error interno Proceso finalizado mientras se leía 
el resultado console del registro: chardev: opening backend "file" failed
(in English: internal error: process exited while reading the console log 
output: chardev: opening backend "file" failed)
Traceback (most recent call last):
  File "/usr/lib64/python2.7/logging/__init__.py", line 846, in emit
    msg = self.format(record)
  File "/opt/stack/nova/nova/openstack/common/log.py", line 514, in format
    return logging.StreamHandler.format(self, record)
  File "/usr/lib64/python2.7/logging/__init__.py", line 723, in format
    return fmt.format(record)
  File "/opt/stack/nova/nova/openstack/common/log.py", line 477, in format
    record.exc_text = self.formatException(record.exc_info, record)
  File "/opt/stack/nova/nova/openstack/common/log.py", line 497, in formatException
    fl = '%s%s' % (pl, line)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 61: ordinal not in range(128)
Logged from file manager.py, line 1123
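
The UnicodeDecodeError looks like a Python 2 string-mixing artifact rather than 
a libvirt problem: formatException builds the message with fl = '%s%s' % (pl, line), 
and when a unicode operand meets a byte string carrying UTF-8 accents (0xc3 0x.., 
exactly what a Spanish-locale libvirt message contains), Python 2 implicitly 
decodes the bytes as ASCII and fails. An illustrative reproduction on stock 
Python 2.7 (my example, not taken from nova):

$ python -c 'print u"%s%s" % (u"log: ", "consola espa\xc3\xb1ola")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 12: ordinal not in range(128)

Running the services under an English or C locale (e.g. LANG=C) should sidestep 
this, since the libvirt error text then stays ASCII.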

2013-07-06 17:13:45.405 ERROR nova.openstack.common.rpc.amqp [req-a1f436a8-a131-4494-8235-dbec3187c220 demo demo] Exception during message handling
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 430, in _process_data
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     rval = self.proxy.dispatch(ctxt, version, method, **args)
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 133, in dispatch
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/exception.py", line 117, in wrapped
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     temp_level, payload)
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/exception.py", line 94, in wrapped
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     return f(self, context, *args, **kw)
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 209, in decorated_function
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     pass
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 195, in decorated_function
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 260, in decorated_function
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     function(self, context, *args, **kwargs)
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 237, in decorated_function
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     e, sys.exc_info())
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 224, in decorated_function
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 1240, in run_instance
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     do_run_instance()
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/openstack/common/lockutils.py", line 242, in inner
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     retval = f(*args, **kwargs)
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/manager.py", line 1239, in do_run_instance
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp     admin_password, is_first_time, node, instance)
2013-07-06 17:13:45.405 TRACE nova.openstack.common.rpc.amqp   File "/opt/stack/nova/nova/compute/

Re: [Openstack] [HyperV][Quantum] Quantum dhcp agent not working for Hyper-V

2013-07-06 Thread Alessandro Pilotti
Hi Bruno,

I just hit the same (or a very similar) issue doing a multinode deployment with 
RDO on CentOS 6.4 (OVS 1.10) while we had no problem until now using Ubuntu 
12.04 (OVS 1.4).
Can you please provide some more details about the Linux OS you are using and 
your multinode configuration?

I tested it with flat and VLAN networks; so far it doesn't look like a Hyper-V 
related issue.


Thanks,

Alessandro


On Jun 7, 2013, at 23:51, Bruno Oliveira ~lychinus 
<brunnop.olive...@gmail.com> wrote:

"(...)Do you have your vSwitch properly configured on your hyper-v host?(...)"

I can't say for sure, Peter, but I think so...

From the troubleshooting we did (and are still doing) I can tell that,
regardless of the network model we're using (FLAT or VLAN network),
the instance that is provisioned on Hyper-V (for some reason) can't
reach the quantum-l3-agent "by default"
(I say "default" because we only managed to get it working after a hard,
long and boring troubleshooting session, and we're still not sure that's
how it should be done)

Since it's not something quick to explain, I'll present the scenario:
(I'm not sure if it might be a candidate for a fix in quantum-l3-agent,
so quantum-devs might be interested too)


Here's how our network interfaces turn out on our network controller:

==
External bridge network
==

Bridge "br-eth1"
   Port "br-eth1"
   Interface "br-eth1"
   type: internal
   Port "eth1.11"
   Interface "eth1.11"
   Port "phy-br-eth1"
   Interface "phy-br-eth1"

==
Internal network
==

  Bridge br-int
   Port "int-br-eth1"
   Interface "int-br-eth1"
   Port br-int
   Interface br-int
   type: internal
   Port "tapb610a695-46"
   tag: 1
   Interface "tapb610a695-46"
   type: internal
   Port "qr-ef10bef4-fa"
   tag: 1
   Interface "qr-ef10bef4-fa"
   type: internal

==

There's another iface named "br-ex" that we're using for floating_ips,
but it has nothing to do with what we're doing right now, so I'm skipping it...


==
So, for the hands-on
==

I know it may be a little bit hard to understand, but I'll do my best
trying to explain:

1) The running instance on Hyper-V, which is linked to the Hyper-V vSwitch,
actually communicates with the bridge "br-eth1" (which lives on the network
controller).

NOTE: That's where the DHCP REQUEST (from the instance) lands


2) The interface MAC address of that running instance on Hyper-V is
fa:16:3e:95:95:e4 (we're going to use it in later steps).
Since DHCP is not fully working yet, we had to manually set an IP for
that instance: 10.5.5.3


3) From that instance's interface, the dhcp_broadcast should be forwarded:
   FROM interface "eth1.12" TO "phy-br-eth1",
   and FROM interface "phy-br-eth1" TO the bridge "br-int"   *** THIS
IS WHERE THE PACKETS ARE DROPPED ***

Look for the "actions:drop" at the end of the flow:
-
root@osnetwork:~# ovs-dpctl dump-flows br-int  |grep 10.5.5.3

in_port(4),eth(src=fa:16:3e:f0:ac:8e,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=10.5.5.3,tip=10.5.5.1,op=1,sha=fa:16:3e:f0:ac:8e,tha=00:00:00:00:00:00),
packets:20, bytes:1120, used:0.412s, actions:drop
-

4) Finally, when the packet reaches the bridge "br-int", the
DHCP_REQUEST should be forwarded to the
   dhcp_interface, that is: tapb610a695-46   *** WHICH IS NOT
HAPPENING EITHER ***


5) How to fix :: bridge br-eth1

---
5.1. Getting to know the ifaces of 'br-eth1'
---
root@osnetwork:~# ovs-ofctl show br-eth1

OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:e0db554e164b
n_tables:255, n_buffers:256
features: capabilities:0xc7, actions:0xfff

1(eth1.11): addr:e0:db:55:4e:16:4b
config: 0
state:  0
current:10GB-FD AUTO_NEG
advertised: 1GB-FD 10GB-FD FIBER AUTO_NEG
supported:  1GB-FD 10GB-FD FIBER AUTO_NEG

3(phy-br-eth1): addr:26:9b:97:93:b9:70
config: 0
state:  0
current:10GB-FD COPPER

LOCAL(br-eth1): addr:e0:db:55:4e:16:4b
config: 0
state:  0

OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0


---
5.2. Adding flow rules to enable passing (instead of dropping)
---

# the source mac_address (dl_src) is the one from the interface of the
# running instance on Hyper-V. This fixes the DROP (only)

root@osnetwork:~# ovs-ofctl add-flow br-eth1 priority=10,in_port=3,dl_src=fa:16:3e:95:95:e4,actions=normal
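
As an illustrative verification (not in the original steps), dump the flow
table and look for the new entry. Bear in mind that flows added by hand like
this are typically overwritten when the quantum OVS agent reprograms the
bridge, so treat it as a diagnostic rather than a permanent fix:

root@osnetwork:~# ovs-ofctl dump-flows br-eth1
# expect a line ending in: priority=10,in_port=3,dl_src=fa:16:3e:95:95:e4 actions=NORMAL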



6) How to fix :: bridge br-int

---
6.

Re: [Openstack] [HyperV][Quantum] Quantum dhcp agent not working for Hyper-V

2013-07-06 Thread Hathaway.Jon
Hi Alessandro

I know this is probably something you have already tested for the RDO 
installation, but have you upgraded the CentOS kernel and the iproute package? 
Both are missing the netns support required for Quantum. Ubuntu fixed this 
issue back in 10.04, but for whatever reason the current production kernel for 
CentOS still hasn't.

We had to update the kernel and the iproute package. If you check the log files 
for the l3-agent, and especially the dhcp logs, you may find errors like 
"command not recognised", as the iproute shipped for CentOS 6.4 doesn't support 
the netns extensions.

https://www.redhat.com/archives/rdo-list/2013-May/msg00015.html

My workaround was:

If installing on an EPEL6 distribution like CentOS 6.4, there is a bug in the 
kernel release which has disabled the network namespace (netns) support that is 
required to create overlapping networks in Quantum, is required to run the 
DHCP agent that assigns IP addresses on boot, and is also needed to set up the 
l3-agent that is responsible for forwarding requests from the instances to the 
API to retrieve any instance-specific metadata.

A quick check on the node configured with Quantum in 
/var/log/quantum/dhcp-agent.log will show something like:

RuntimeError:
Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', '-o', 
'netns', 'list']
Exit code: 255
Stdout: ''
Stderr: 'Object "netns" is unknown, try "ip help".\n'

If you try to run 'ip netns' from the command line and it fails, you will need 
to update the kernel and possibly the iproute2 package:

[root@oscontroller ~]# ip netns
Object "netns" is unknown, try "ip help".

Netns is available in the iproute2 package, but it requires additional support 
from the kernel. Red Hat has released a kernel for testing only, version 
kernel-2.6.32-358.6.2.openstack.el6.x86_64, whilst the installed version that 
comes with CentOS 6.4 is kernel-2.6.32-358.el6.x86_64.

Adding the new kernel and iproute2 packages requires updating the kernel and 
kernel-firmware packages from the Grizzly repository:

yum install 
http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/kernel-firmware-2.6.32-358.6.2.openstack.el6.noarch.rpm

yum install 
http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/kernel-2.6.32-358.6.2.openstack.el6.x86_64.rpm

yum install 
http://repos.fedorapeople.org/repos/openstack/openstack-grizzly/epel-6/iproute-2.6.32-23.el6_4.netns.1.x86_64.rpm

Check in /etc/grub.conf that the new kernel is being referenced and then 
restart the node running Quantum.

After reboot, try running 'ip netns' again; it should complete without an error.
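
A quick post-reboot sanity check (an illustrative addition, not in the original 
steps; the namespace name ns-test is arbitrary):

[root@oscontroller ~]# uname -r     # expect 2.6.32-358.6.2.openstack.el6.x86_64
[root@oscontroller ~]# ip netns list    # no output and no error means netns works
[root@oscontroller ~]# ip netns add ns-test
[root@oscontroller ~]# ip netns exec ns-test ip link show    # a fresh namespace shows only "lo"
[root@oscontroller ~]# ip netns delete ns-test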

If you have previously added an instance before upgrading the packages, you 
will need to remove the networks, routers and ports and re-add them before 
continuing. However, it is likely that you will end up with stale ports on the 
Quantum server, as shown below:

[root@oscontroller quantum]# ovs-vsctl show
e4b86f82-2d16-49b1-9077-93abf2b32400
    Bridge br-ex
        Port "qg-3d8f69e7-5d"
            Interface "qg-3d8f69e7-5d"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port "qr-c7145535-d1"
            tag: 1
            Interface "qr-c7145535-d1"
                type: internal
        Port "tapc4fb5d73-3e"
            tag: 1
            Interface "tapc4fb5d73-3e"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tape76c5e3c-1b"
            tag: 2
            Interface "tape76c5e3c-1b"
                type: internal
    ovs_version: "1.10.0"

These ports/interfaces will need to be deleted before the networking will work 
correctly.
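
As an illustrative sketch (the exact deletion commands aren't shown above; the 
port names are taken from the output, so substitute whatever your own 
ovs-vsctl show reports):

[root@oscontroller quantum]# ovs-vsctl del-port br-int tape76c5e3c-1b
[root@oscontroller quantum]# ovs-vsctl del-port br-int tapc4fb5d73-3e
[root@oscontroller quantum]# ovs-vsctl del-port br-int qr-c7145535-d1
[root@oscontroller quantum]# ovs-vsctl del-port br-ex qg-3d8f69e7-5d

Restarting the quantum agents afterwards should recreate the ports for the 
networks that still exist.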

Just a thought.

Jon


Re: [Openstack] [HyperV][Quantum] Quantum dhcp agent not working for Hyper-V

2013-07-06 Thread Alessandro Pilotti
Hi Jon,

Thanks for your help! Both the kernel and the iproute packages are updated; RDO 
does a great job with this. Besides the 2.6.32 + netns kernel provided by RDO, 
I also tested with a 3.9.8 kernel, with the same results. I'd add to your 
troubleshooting steps a very simple test to check whether netns is enabled in 
the kernel: checking if the "/proc/self/ns" path exists.
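
As a one-liner (an illustrative phrasing of that check, using the path 
mentioned above):

[root@oscontroller ~]# test -d /proc/self/ns && echo "netns support present" || echo "netns support missing"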

Back to the original issue, there are no errors on the Quantum side.


Thanks,

Alessandro





Re: [Openstack] spawning instance fails due to glance client error

2013-07-06 Thread Mark A. Nye
I was finally able to determine the cause of this problem. A bug report
with full description has been submitted here:
https://bugs.launchpad.net/glance/+bug/1198566

best,
Mark


On Thu, Jun 27, 2013 at 7:57 PM, Mark A. Nye wrote:

>
> Hello,
>
> We have a working two-host Folsom (2012.2.3 on Ubuntu 12.04) OpenStack
> cluster with a controller and compute node. This afternoon we added a
> second compute node, but attempts to spawn instances on the new node fail
> with a glance client exception (see example log below).
>
> I've triple-checked our new nova.conf and api-paste.ini files, which are
> identical to what we have on the working compute node, except for the local
> IP addresses set for metadata_host, vncserver_proxyclient_address, and
> my_ip.
>
> When I run "nova-manage service list", the nova-compute and nova-network
> services on all three machines report a ":-)" status.
>
> The ONLY significant difference I'm seeing is that the new compute node is
> running the 2012.2.4 version of nova-network, nova-compute,
> and nova-api-metadata. I wouldn't expect this to be a problem, but maybe a
> bug or incompatibility was introduced in 2012.2.4? I'd like to try rolling
> back to 2012.2.3, but I can't find a copy of the older .deb packages.
>
> Am I missing something obvious? Can anyone offer a suggestion?
>
> best,
> Mark
>
>
>
>
>
> 2013-06-28 02:12:47 5290 ERROR nova.openstack.common.rpc.amqp [-] Exception during message handling
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 276, in _process_data
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp     rval = self.proxy.dispatch(ctxt, version, method, **args)
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", line 145, in dispatch
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 117, in wrapped
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp     temp_level, payload)
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 92, in wrapped
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp     return f(*args, **kw)
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 176, in decorated_function
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp     pass
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 162, in decorated_function
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 197, in decorated_function
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp     kwargs['instance']['uuid'], e, sys.exc_info())
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 191, in decorated_function
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 839, in run_instance
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp     do_run_instance()
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 803, in inner
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp     retval = f(*args, **kwargs)
> 2013-06-28 02:12:47 5290 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 838, in do_run_in