I even tried with a bare .raw file, but the error is still the same:

2016-07-08 16:29:40.931 86007 INFO nova.compute.claims
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Total memory: 193168 MB, used:
1024.00 MB

2016-07-08 16:29:40.931 86007 INFO nova.compute.claims
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] memory limit: 289752.00 MB, free:
288728.00 MB

2016-07-08 16:29:40.932 86007 INFO nova.compute.claims
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Total disk: 8168 GB, used: 1.00 GB

2016-07-08 16:29:40.932 86007 INFO nova.compute.claims
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] disk limit: 8168.00 GB, free: 8167.00
GB

2016-07-08 16:29:40.948 86007 INFO nova.compute.claims
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Claim successful

2016-07-08 16:29:41.384 86007 INFO nova.virt.libvirt.driver
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Creating image

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Instance failed to spawn

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Traceback (most recent call last):

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]   File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2156, in
_build_resources

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]     yield resources

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]   File
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in
_build_and_run_instance

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]
block_device_info=block_device_info)

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2531,
in spawn

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]     write_to_disk=True)

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4427,
in _get_guest_xml

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]     context)

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4286,
in _get_guest_config

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]     flavor, guest.os_type)

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3387,
in _get_guest_storage_config

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]     inst_type)

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]   File
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3320,
in _get_guest_disk_config

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]     raise exception.Invalid(msg)

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Invalid: Volume sets discard option,
but libvirt (1, 0, 6) or later is required, qemu (1, 6, 0) or later is
required.

2016-07-08 16:29:42.259 86007 ERROR nova.compute.manager [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01]

2016-07-08 16:29:42.261 86007 INFO nova.compute.manager
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Terminating instance

2016-07-08 16:29:42.267 86007 INFO nova.virt.libvirt.driver [-] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] During wait destroy, instance
disappeared.

2016-07-08 16:29:42.508 86007 INFO nova.virt.libvirt.driver
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Deleting instance files
/var/lib/nova/instances/cb6056a8-1bb9-4475-a702-9a2b0a7dca01_del

2016-07-08 16:29:42.509 86007 INFO nova.virt.libvirt.driver
[req-b43bbec9-c875-4f4b-ad2c-0d87a02bc7e1 289598890db341f4af45ce5c57c41ba3
713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
cb6056a8-1bb9-4475-a702-9a2b0a7dca01] Deletion of
/var/lib/nova/instances/cb6056a8-1bb9-4475-a702-9a2b0a7dca01_del complete

2016-07-08 16:30:16.167 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Auditing locally
available compute resources for node controller

2016-07-08 16:30:16.654 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Total usable vcpus:
40, total allocated vcpus: 0

2016-07-08 16:30:16.655 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Final resource view:
name=controller phys_ram=193168MB used_ram=1024MB phys_disk=8168GB
used_disk=1GB total_vcpus=40 used_vcpus=0 pci_stats=None

2016-07-08 16:30:16.692 86007 INFO nova.compute.resource_tracker
[req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Compute_service record
updated for OSKVM1:controller

[root@OSKVM1 nova]# rpm -qa|grep libvirt

libvirt-client-1.2.17-13.el7_2.5.x86_64

libvirt-daemon-config-network-1.2.17-13.el7_2.5.x86_64

libvirt-daemon-driver-network-1.2.17-13.el7_2.5.x86_64

libvirt-daemon-driver-storage-1.2.17-13.el7_2.5.x86_64

libvirt-1.2.17-13.el7_2.5.x86_64

libvirt-gobject-0.1.9-1.el7.x86_64

libvirt-daemon-driver-interface-1.2.17-13.el7_2.5.x86_64

libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64

libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.5.x86_64

libvirt-python-1.2.17-2.el7.x86_64

libvirt-daemon-1.2.17-13.el7_2.5.x86_64

libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.5.x86_64

libvirt-daemon-driver-lxc-1.2.17-13.el7_2.5.x86_64

libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.5.x86_64

libvirt-daemon-kvm-1.2.17-13.el7_2.5.x86_64

libvirt-glib-0.1.9-1.el7.x86_64

libvirt-daemon-driver-secret-1.2.17-13.el7_2.5.x86_64

libvirt-gconfig-0.1.9-1.el7.x86_64

[root@OSKVM1 nova]# rpm -qa|grep qemu

ipxe-roms-qemu-20160127-1.git6366fa7a.el7.noarch

qemu-kvm-1.5.3-105.el7_2.4.x86_64

libvirt-daemon-driver-qemu-1.2.17-13.el7_2.5.x86_64

qemu-img-1.5.3-105.el7_2.4.x86_64

qemu-kvm-common-1.5.3-105.el7_2.4.x86_64
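
For reference, the spawn error above wants libvirt (1, 0, 6) and qemu (1, 6, 0) or later before the discard option is allowed; the libvirt 1.2.17 packages listed here clear that bar, but qemu-kvm 1.5.3 falls short of the qemu minimum. A quick way to double-check the versions the hypervisor stack itself reports (the qemu-kvm path is an assumption based on the stock el7 layout) is something like:

# versions as seen by libvirt and qemu themselves
virsh version
/usr/libexec/qemu-kvm --version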

[root@OSKVM1 ~]# glance image-create --name "Fedora" --file
/root/Fedora-Cloud-Base-24-1.2.x86_64.raw --disk-format raw
--container-format bare --visibility public --progress

[=============================>] 100%

+------------------+------------------------------------------------------------------+
| Property         | Value                                                            |
+------------------+------------------------------------------------------------------+
| checksum         | dc5e336d5ba97e092a5fdfd999501c25                                 |
| container_format | bare                                                             |
| created_at       | 2016-07-08T20:19:22Z                                             |
| direct_url       | rbd://9f923089-a6c0-4169-ace8-ad8cc4cca116/images/ef181b81-64ec- |
|                  | 47e5-9850-10cd40698284/snap                                      |
| disk_format      | raw                                                              |
| id               | ef181b81-64ec-47e5-9850-10cd40698284                             |
| min_disk         | 0                                                                |
| min_ram          | 0                                                                |
| name             | Fedora                                                           |
| owner            | 7ee74061ac7c421a824a6e04e445d3d0                                 |
| protected        | False                                                            |
| size             | 3221225472                                                       |
| status           | active                                                           |
| tags             | []                                                               |
| updated_at       | 2016-07-08T20:27:10Z                                             |
+------------------+------------------------------------------------------------------+
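
Since direct_url points at the RBD backend, a quick sanity check that this raw upload really landed in the Ceph images pool (assuming that pool is the one backing Glance here) is something like:

rbd -p images ls
rbd -p images info ef181b81-64ec-47e5-9850-10cd40698284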

On Fri, Jul 8, 2016 at 12:25 PM, Gaurav Goyal <er.gauravgo...@gmail.com>
wrote:

> [root@OSKVM1 ~]# grep -v "^#" /etc/nova/nova.conf|grep -v ^$
>
> [DEFAULT]
>
> instance_usage_audit = True
>
> instance_usage_audit_period = hour
>
> notify_on_state_change = vm_and_task_state
>
> notification_driver = messagingv2
>
> rbd_user=cinder
>
> rbd_secret_uuid=1989f7a6-4ecb-4738-abbf-2962c29b2bbb
>
> rpc_backend = rabbit
>
> auth_strategy = keystone
>
> my_ip = 10.1.0.4
>
> network_api_class = nova.network.neutronv2.api.API
>
> security_group_api = neutron
>
> linuxnet_interface_driver =
> nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
>
> firewall_driver = nova.virt.firewall.NoopFirewallDriver
>
> enabled_apis=osapi_compute,metadata
>
> [api_database]
>
> connection = mysql://nova:nova@controller/nova
>
> [barbican]
>
> [cells]
>
> [cinder]
>
> os_region_name = RegionOne
>
> [conductor]
>
> [cors]
>
> [cors.subdomain]
>
> [database]
>
> [ephemeral_storage_encryption]
>
> [glance]
>
> host = controller
>
> [guestfs]
>
> [hyperv]
>
> [image_file_url]
>
> [ironic]
>
> [keymgr]
>
> [keystone_authtoken]
>
> auth_uri = http://controller:5000
>
> auth_url = http://controller:35357
>
> auth_plugin = password
>
> project_domain_id = default
>
> user_domain_id = default
>
> project_name = service
>
> username = nova
>
> password = nova
>
> [libvirt]
>
> inject_password=false
>
> inject_key=false
>
> inject_partition=-2
>
> live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER,
> VIR_MIGRATE_LIVE, VIR_MIGRATE_PERSIST_DEST, VIR_MIGRATE_TUNNELLED
>
> disk_cachemodes ="network=writeback"
>
> images_type=rbd
>
> images_rbd_pool=vms
>
> images_rbd_ceph_conf =/etc/ceph/ceph.conf
>
> rbd_user=cinder
>
> rbd_secret_uuid=1989f7a6-4ecb-4738-abbf-2962c29b2bbb
>
> hw_disk_discard=unmap
>
> [matchmaker_redis]
>
> [matchmaker_ring]
>
> [metrics]
>
> [neutron]
>
> url = http://controller:9696
>
> auth_url = http://controller:35357
>
> auth_plugin = password
>
> project_domain_id = default
>
> user_domain_id = default
>
> region_name = RegionOne
>
> project_name = service
>
> username = neutron
>
> password = neutron
>
> service_metadata_proxy = True
>
> metadata_proxy_shared_secret = XXXXX
>
> [osapi_v21]
>
> [oslo_concurrency]
>
> lock_path = /var/lib/nova/tmp
>
> [oslo_messaging_amqp]
>
> [oslo_messaging_qpid]
>
> [oslo_messaging_rabbit]
>
> rabbit_host = controller
>
> rabbit_userid = openstack
>
> rabbit_password = XXXXX
>
> [oslo_middleware]
>
> [rdp]
>
> [serial_console]
>
> [spice]
>
> [ssl]
>
> [trusted_computing]
>
> [upgrade_levels]
>
> [vmware]
>
> [vnc]
>
> enabled = True
>
> vncserver_listen = 0.0.0.0
>
> novncproxy_base_url = http://controller:6080/vnc_auto.html
>
> vncserver_proxyclient_address = $my_ip
>
> [workarounds]
>
> [xenserver]
>
> [zookeeper]
>
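> (The hw_disk_discard=unmap line in the [libvirt] section above is what
> triggers the discard capability check behind the "Volume sets discard
> option..." error. If upgrading qemu is not an option, a rough test, assuming
> nova-compute can be restarted, is to comment that line out temporarily and
> retry the boot:)
>
> # in /etc/nova/nova.conf, section [libvirt]
> #hw_disk_discard=unmap
>
> systemctl restart openstack-nova-compute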
>
> [root@OSKVM1 ceph]# ls -ltr
>
> total 24
>
> -rwxr-xr-x 1 root   root    92 May 10 12:58 rbdmap
>
> -rw------- 1 root   root     0 Jun 28 11:05 tmpfDt6jw
>
> -rw-r--r-- 1 root   root    63 Jul  5 12:59 ceph.client.admin.keyring
>
> -rw-r--r-- 1 glance glance  64 Jul  5 14:51 ceph.client.glance.keyring
>
> -rw-r--r-- 1 cinder cinder  64 Jul  5 14:53 ceph.client.cinder.keyring
>
> -rw-r--r-- 1 cinder cinder  71 Jul  5 14:54
> ceph.client.cinder-backup.keyring
>
> -rwxrwxrwx 1 root   root   438 Jul  7 14:19 ceph.conf
>
> [root@OSKVM1 ceph]# more ceph.client.cinder.keyring
>
> [client.cinder]
>
> key = AQCIAHxX9ga8LxAAU+S3Vybdu+Cm2bP3lplGnA==
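>
> (To confirm this key actually authenticates against the pools Nova and
> Cinder use, a quick check, assuming the keyring stays at
> /etc/ceph/ceph.client.cinder.keyring as listed above, is:)
>
> rbd --id cinder -p vms ls
> rbd --id cinder -p volumes ls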
>
> [root@OSKVM1 ~]# rados lspools
>
> rbd
>
> volumes
>
> images
>
> backups
>
> vms
>
> [root@OSKVM1 ~]# rbd -p rbd ls
>
> [root@OSKVM1 ~]# rbd -p volumes ls
>
> volume-27717a88-3c80-420f-8887-4ca5c5b94023
>
> volume-3bd22868-cb2a-4881-b9fb-ae91a6f79cb9
>
> volume-b9cf7b94-cfb6-4b55-816c-10c442b23519
>
> [root@OSKVM1 ~]# rbd -p images ls
>
> 9aee6c4e-3b60-49d5-8e17-33953e384a00
>
> a8b45c8a-a5c8-49d8-a529-1e4088bdbf3f
>
> [root@OSKVM1 ~]# rbd -p vms ls
>
> [root@OSKVM1 ~]# rbd -p backup
>
>
> *I could create a Cinder volume and attach it to one of the already built
> Nova instances.*
>
> [root@OSKVM1 ceph]# nova volume-list
>
> WARNING: Command volume-list is deprecated and will be removed after Nova
> 13.0.0 is released. Use python-cinderclient or openstackclient instead.
>
>
> +--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+
> | ID                                   | Status    | Display Name     | Size | Volume Type | Attached to                          |
> +--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+
> | 14a572d0-2834-40d6-9650-cb3e18271963 | available | nova-vol_gg      | 10   | -           |                                      |
> | 3bd22868-cb2a-4881-b9fb-ae91a6f79cb9 | in-use    | nova-vol_1       | 2    | -           | d06f7c3b-5bbd-4597-99ce-fa981d2e10db |
> | 27717a88-3c80-420f-8887-4ca5c5b94023 | available | cinder-ceph-vol1 | 10   | -           |                                      |
> +--------------------------------------+-----------+------------------+------+-------------+--------------------------------------+
>
> On Fri, Jul 8, 2016 at 11:33 AM, Gaurav Goyal <er.gauravgo...@gmail.com>
> wrote:
>
>> Hi Kees,
>>
>> I regenerated the UUID as per your suggestion.
>> Now I have the same UUID on host1 and host2.
>> I could create volumes and attach them to existing VMs.
>>
>> I could create new glance images.
>>
>> But I am still finding the same error when launching an instance via the GUI.
>>
>>
>> 2016-07-08 11:23:25.067 86007 INFO nova.compute.resource_tracker
>> [req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Auditing locally
>> available compute resources for node controller
>>
>> 2016-07-08 11:23:25.527 86007 INFO nova.compute.resource_tracker
>> [req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Total usable vcpus:
>> 40, total allocated vcpus: 0
>>
>> 2016-07-08 11:23:25.527 86007 INFO nova.compute.resource_tracker
>> [req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Final resource view:
>> name=controller phys_ram=193168MB used_ram=1024MB phys_disk=8168GB
>> used_disk=1GB total_vcpus=40 used_vcpus=0 pci_stats=None
>>
>> 2016-07-08 11:23:25.560 86007 INFO nova.compute.resource_tracker
>> [req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Compute_service record
>> updated for OSKVM1:controller
>>
>> 2016-07-08 11:24:25.065 86007 INFO nova.compute.resource_tracker
>> [req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Auditing locally
>> available compute resources for node controller
>>
>> 2016-07-08 11:24:25.561 86007 INFO nova.compute.resource_tracker
>> [req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Total usable vcpus:
>> 40, total allocated vcpus: 0
>>
>> 2016-07-08 11:24:25.562 86007 INFO nova.compute.resource_tracker
>> [req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Final resource view:
>> name=controller phys_ram=193168MB used_ram=1024MB phys_disk=8168GB
>> used_disk=1GB total_vcpus=40 used_vcpus=0 pci_stats=None
>>
>> 2016-07-08 11:24:25.603 86007 INFO nova.compute.resource_tracker
>> [req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Compute_service record
>> updated for OSKVM1:controller
>>
>> 2016-07-08 11:25:18.138 86007 INFO nova.compute.manager
>> [req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7] Starting instance...
>>
>> 2016-07-08 11:25:18.255 86007 INFO nova.compute.claims
>> [req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7] Attempting claim: memory 512 MB, disk
>> 1 GB
>>
>> 2016-07-08 11:25:18.255 86007 INFO nova.compute.claims
>> [req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7] Total memory: 193168 MB, used:
>> 1024.00 MB
>>
>> 2016-07-08 11:25:18.256 86007 INFO nova.compute.claims
>> [req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7] memory limit: 289752.00 MB, free:
>> 288728.00 MB
>>
>> 2016-07-08 11:25:18.256 86007 INFO nova.compute.claims
>> [req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7] Total disk: 8168 GB, used: 1.00 GB
>>
>> 2016-07-08 11:25:18.257 86007 INFO nova.compute.claims
>> [req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7] disk limit: 8168.00 GB, free: 8167.00
>> GB
>>
>> 2016-07-08 11:25:18.271 86007 INFO nova.compute.claims
>> [req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7] Claim successful
>>
>> 2016-07-08 11:25:18.747 86007 INFO nova.virt.libvirt.driver
>> [req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7] Creating image
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager
>> [req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7] Instance failed to spawn
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7] Traceback (most recent call last):
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2156, in
>> _build_resources
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     yield resources
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in
>> _build_and_run_instance
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]
>> block_device_info=block_device_info)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2527,
>> in spawn
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     admin_pass=admin_password)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2953,
>> in _create_image
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     instance, size,
>> fallback_from_host)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6406,
>> in _try_fetch_image_cache
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     size=size)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
>> 240, in cache
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     *args, **kwargs)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
>> 811, in create_image
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     prepare_template(target=base,
>> max_size=size, *args, **kwargs)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 254,
>> in inner
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     return f(*args, **kwargs)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
>> 230, in fetch_func_sync
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     fetch_func(target=target, *args,
>> **kwargs)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2947,
>> in clone_fallback_to_fetch
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     libvirt_utils.fetch_image(*args,
>> **kwargs)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/utils.py", line 408, in
>> fetch_image
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     max_size=max_size)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/images.py", line 123, in
>> fetch_to_raw
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     max_size=max_size)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/images.py", line 113, in fetch
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     IMAGE_API.download(context,
>> image_href, dest_path=path)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/image/api.py", line 182, in download
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     dst_path=dest_path)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 367, in
>> download
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]
>> _reraise_translated_image_exception(image_id)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 613, in
>> _reraise_translated_image_exception
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     six.reraise(new_exc, None,
>> exc_trace)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 365, in
>> download
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     image_chunks =
>> self._client.call(context, 1, 'data', image_id)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 231, in call
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     result = getattr(client.images,
>> method)(*args, **kwargs)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 148, in
>> data
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     % urlparse.quote(str(image_id)))
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 280,
>> in get
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     return self._request('GET', url,
>> **kwargs)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 272,
>> in _request
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     resp, body_iter =
>> self._handle_response(resp)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]   File
>> "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 93, in
>> _handle_response
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]     raise exc.from_response(resp,
>> resp.content)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7] Invalid: 400 Bad Request: Unknown
>> scheme 'file' found in URI (HTTP 400)
>>
>> 2016-07-08 11:25:19.126 86007 ERROR nova.compute.manager [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7]
>>
>> 2016-07-08 11:25:19.575 86007 INFO nova.compute.manager
>> [req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7] Terminating instance
>>
>> 2016-07-08 11:25:19.583 86007 INFO nova.virt.libvirt.driver [-]
>> [instance: bf4839c8-2af6-4959-9158-fe411e1cfae7] During wait destroy,
>> instance disappeared.
>>
>> 2016-07-08 11:25:19.665 86007 INFO nova.virt.libvirt.driver
>> [req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7] Deleting instance files
>> /var/lib/nova/instances/bf4839c8-2af6-4959-9158-fe411e1cfae7_del
>>
>> 2016-07-08 11:25:19.666 86007 INFO nova.virt.libvirt.driver
>> [req-3173f5b7-fa02-420c-954b-e21c3ce8d183 289598890db341f4af45ce5c57c41ba3
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> bf4839c8-2af6-4959-9158-fe411e1cfae7] Deletion of
>> /var/lib/nova/instances/bf4839c8-2af6-4959-9158-fe411e1cfae7_del complete
>>
>> 2016-07-08 11:25:26.073 86007 INFO nova.compute.resource_tracker
>> [req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Auditing locally
>> available compute resources for node controller
>>
>> 2016-07-08 11:25:26.477 86007 INFO nova.compute.resource_tracker
>> [req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Total usable vcpus:
>> 40, total allocated vcpus: 0
>>
>> 2016-07-08 11:25:26.478 86007 INFO nova.compute.resource_tracker
>> [req-4b7eccc8-0bf5-4f55-a941-4c93e97ef5df - - - - -] Final resource view:
>> name=controller phys_ram=193168MB used_ram=1024MB phys_disk=8168GB
>> used_disk=1GB total_vcpus=40 used_vcpus=0 pci_stats=None
>>
>>
>> Regards
>> Gaurav Goyal
>>
>> On Fri, Jul 8, 2016 at 10:17 AM, Kees Meijs <k...@nefos.nl> wrote:
>>
>>> Hi,
>>>
>>> I'd recommend generating a UUID and using it for all your compute nodes.
>>> This way, you can keep your configuration in libvirt constant.
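>>>
>>> A minimal sketch of that, reusing the rbd_secret_uuid already present in
>>> nova.conf and the client.cinder key, would be to run on every compute node:
>>>
>>> cat > secret.xml <<EOF
>>> <secret ephemeral='no' private='no'>
>>>   <uuid>1989f7a6-4ecb-4738-abbf-2962c29b2bbb</uuid>
>>>   <usage type='ceph'>
>>>     <name>client.cinder secret</name>
>>>   </usage>
>>> </secret>
>>> EOF
>>> virsh secret-define --file secret.xml
>>> virsh secret-set-value --secret 1989f7a6-4ecb-4738-abbf-2962c29b2bbb --base64 "$(ceph auth get-key client.cinder)"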
>>>
>>> Regards,
>>> Kees
>>>
>>> On 08-07-16 16:15, Gaurav Goyal wrote:
>>> >
>>> > For the below section, should I generate a separate UUID for both
>>> > compute hosts?
>>> >
>>>
>>
>>
>