Hi

Can someone please help with this matter?

I have installed and configured an OpenStack (Juno) environment on Ubuntu 14.04.
We currently have 1 controller node, 3 network nodes, 4 compute nodes and
4 Swift nodes.
I'm using OpenStack Networking (neutron).
I've recently introduced VLANs for the tunnel, management and external networks.

When I try to create an instance with this command:

nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 \
  --nic net-id=7a344656-815c-4116-b697-b52f9fdc6e4c \
  --security-group default --key-name demo-key demo-instance3

it fails. The output of nova list is:
root@controller2:/# nova list
+--------------------------------------+-----------------+--------+------------+-------------+------------------+
| ID                                   | Name            | Status | Task State | Power State | Networks         |
+--------------------------------------+-----------------+--------+------------+-------------+------------------+
| ca662fc0-2417-4da1-be2c-d6ccf90ed732 | demo-instance22 | ERROR  | -          | NOSTATE     |                  |
| 17d26ca3-f56c-4a87-ae0a-acfafea4838c | demo-instance30 | ERROR  | -          | NOSTATE     | demo-net=x.x.x.x |
+--------------------------------------+-----------------+--------+------------+-------------+------------------+

This error appears in /var/log/syslog on the compute node:

Jun 26 14:55:04 compute5 kernel: [ 2187.597951]  nbd8: p1
Jun 26 14:55:04 compute5 kernel: [ 2187.668430] EXT4-fs (nbd8): VFS: Can't find ext4 filesystem
Jun 26 14:55:04 compute5 kernel: [ 2187.668521] EXT4-fs (nbd8): VFS: Can't find ext4 filesystem
Jun 26 14:55:04 compute5 kernel: [ 2187.668583] EXT4-fs (nbd8): VFS: Can't find ext4 filesystem
Jun 26 14:55:04 compute5 kernel: [ 2187.668899] FAT-fs (nbd8): bogus number of reserved sectors
Jun 26 14:55:04 compute5 kernel: [ 2187.668936] FAT-fs (nbd8): Can't find a valid FAT filesystem
Jun 26 14:55:04 compute5 kernel: [ 2187.753989] block nbd8: NBD_DISCONNECT
Jun 26 14:55:04 compute5 kernel: [ 2187.754056] block nbd8: Receive control failed (result -32)
Jun 26 14:55:04 compute5 kernel: [ 2187.754161] block nbd8: queue cleared

Also, this is logged on the compute node, in /var/log/nova/nova-compute.log:

2015-06-26 14:55:02.591 7961 AUDIT nova.compute.claims [-] [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c] disk limit not specified, defaulting to unlimited
2015-06-26 14:55:02.606 7961 AUDIT nova.compute.claims [-] [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c] Claim successful
2015-06-26 14:55:02.721 7961 INFO nova.scheduler.client.report [-] Compute_service record updated for ('compute5')
2015-06-26 14:55:02.836 7961 INFO nova.scheduler.client.report [-] Compute_service record updated for ('compute5')
2015-06-26 14:55:03.115 7961 INFO nova.virt.libvirt.driver [-] [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c] Creating image
2015-06-26 14:55:03.118 7961 INFO nova.openstack.common.lockutils [-] Created lock path: /var/lib/nova/instances/locks
2015-06-26 14:55:03.458 7961 INFO nova.scheduler.client.report [-] Compute_service record updated for ('compute5', 'compute5.siminn.is')
2015-06-26 14:55:04.088 7961 INFO nova.virt.disk.vfs.api [-] Unable to import guestfs, falling back to VFSLocalFS
2015-06-26 14:55:04.363 7961 ERROR nova.compute.manager [-] [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c] Instance failed to spawn
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c] Traceback (most recent call last):
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2267, in _build_resources
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     yield resources
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2137, in _build_and_run_instance
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     block_device_info=block_device_info)
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2620, in spawn
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     write_to_disk=True)
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4159, in _get_guest_xml
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     context)
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3937, in _get_guest_config
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     flavor, CONF.libvirt.virt_type)
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/vif.py", line 352, in get_config
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]     _("Unexpected vif_type=%s") % vif_type)
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c] NovaException: Unexpected vif_type=binding_failed
2015-06-26 14:55:04.363 7961 TRACE nova.compute.manager [instance: 17d26ca3-f56c-4a87-ae0a-acfafea4838c]


Best regards
Yngvi
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack