Thanks for the verification!

Correct, I didn't find an additional [ceph] section in my cinder.conf file.
Should I create it manually?
Since I didn't find a [ceph] section, I set the same parameters in the
[DEFAULT] section instead.
I will change that as per your suggestion.
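
For reference, here is roughly what I plan to move out of [DEFAULT] into a new
[ceph] section (just a sketch combining the example from your link [1] with
the values already in my file, so please correct me if any option does not
belong there):

[DEFAULT]
...
enabled_backends = ceph
glance_api_version = 2
...
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = a536c85f-d660-4c25-a840-e321c09e7941
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1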

Moreover, from some other links I gathered that I should also configure the
following additional parameters. Should I do that and install the tgtadm
package? (More on this below the list.)

rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
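
For context, and please correct me if this is wrong: based on the "Starting
volume driver LVMVolumeDriver" line in my log, my impression is that
iscsi_helper, volume_group and tgtadm belong to the LVM/iSCSI driver rather
than to RBD, i.e. something like this purely hypothetical section:

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_helper = tgtadm

So I suspect they only matter if I also wanted an LVM backend alongside Ceph.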

Do I need to execute the following commands?

pvcreate /dev/rbd1
vgcreate cinder-volumes /dev/rbd1


Regards

Gaurav Goyal



On Thu, Jul 7, 2016 at 10:02 PM, Jason Dillaman <jdill...@redhat.com> wrote:

> These lines from your log output indicate that you are configured to use LVM
> as a Cinder backend.
>
> > 2016-07-07 16:20:31.966 32549 INFO cinder.volume.manager
> [req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Starting volume
> driver LVMVolumeDriver (3.0.0)
> > 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Command: sudo
> cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o
> name cinder-volumes
>
> Looking at your provided configuration, I don't see a "[ceph]"
> configuration section. Here is a configuration example [1] for Cinder.
>
> [1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-cinder
>
> On Thu, Jul 7, 2016 at 9:35 PM, Gaurav Goyal <er.gauravgo...@gmail.com>
> wrote:
>
>> Hi Kees/Fran,
>>
>>
>> Do you find any issue in my cinder.conf file?
>>
>> It says Volume group "cinder-volumes" not found. When should this volume
>> group be configured?
>>
>> I have done the Ceph configuration for Nova instance creation,
>> but I am still facing the same error.
>>
>>
>>
>> */var/log/cinder/volume.log*
>>
>> 2016-07-07 16:20:13.765 136259 ERROR cinder.service [-] Manager for
>> service cinder-volume OSKVM1@ceph is reporting problems, not sending
>> heartbeat. Service will appear "down".
>>
>> 2016-07-07 16:20:23.770 136259 ERROR cinder.service [-] Manager for
>> service cinder-volume OSKVM1@ceph is reporting problems, not sending
>> heartbeat. Service will appear "down".
>>
>> 2016-07-07 16:20:30.789 136259 WARNING oslo_messaging.server [-]
>> start/stop/wait must be called in the same thread
>>
>> 2016-07-07 16:20:30.791 136259 WARNING oslo_messaging.server
>> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] start/stop/wait must
>> be called in the same thread
>>
>> 2016-07-07 16:20:30.794 136247 INFO oslo_service.service
>> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Caught SIGTERM,
>> stopping children
>>
>> 2016-07-07 16:20:30.799 136247 INFO oslo_service.service
>> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Waiting on 1 children
>> to exit
>>
>> 2016-07-07 16:20:30.806 136247 INFO oslo_service.service
>> [req-f62eb1bb-6883-457f-9f63-b5556342eca7 - - - - -] Child 136259 killed by
>> signal 15
>>
>> 2016-07-07 16:20:31.950 32537 INFO cinder.volume.manager
>> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Determined volume DB
>> was not empty at startup.
>>
>> 2016-07-07 16:20:31.956 32537 INFO cinder.volume.manager
>> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Image-volume cache
>> disabled for host OSKVM1@ceph.
>>
>> 2016-07-07 16:20:31.957 32537 INFO oslo_service.service
>> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Starting 1 workers
>>
>> 2016-07-07 16:20:31.960 32537 INFO oslo_service.service
>> [req-cef7baaa-b0ef-4365-89d9-4379eb1c104c - - - - -] Started child 32549
>>
>> 2016-07-07 16:20:31.963 32549 INFO cinder.service [-] Starting
>> cinder-volume node (version 7.0.1)
>>
>> 2016-07-07 16:20:31.966 32549 INFO cinder.volume.manager
>> [req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Starting volume driver
>> LVMVolumeDriver (3.0.0)
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
>> [req-f9371a24-bb2b-42fb-ad4e-e2cfc271fe10 - - - - -] Failed to initialize
>> driver.
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Traceback (most
>> recent call last):
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
>> "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 368, in
>> init_host
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
>> self.driver.check_for_setup_error()
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
>> "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 105, in
>> wrapper
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager     return
>> f(*args, **kwargs)
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
>> "/usr/lib/python2.7/site-packages/cinder/volume/drivers/lvm.py", line 269,
>> in check_for_setup_error
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
>> lvm_conf=lvm_conf_file)
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
>> "/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line 86,
>> in __init__
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager     if
>> self._vg_exists() is False:
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
>> "/usr/lib/python2.7/site-packages/cinder/brick/local_dev/lvm.py", line 123,
>> in _vg_exists
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
>> run_as_root=True)
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
>> "/usr/lib/python2.7/site-packages/cinder/utils.py", line 155, in execute
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager     return
>> processutils.execute(*cmd, **kwargs)
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager   File
>> "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line
>> 275, in execute
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
>> cmd=sanitized_cmd)
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
>> ProcessExecutionError: Unexpected error while running command.
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Command: sudo
>> cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C vgs --noheadings -o
>> name cinder-volumes
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Exit code: 5
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Stdout: u''
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager Stderr: u'
>> Volume group "cinder-volumes" not found\n  Cannot process volume group
>> cinder-volumes\n'
>>
>> 2016-07-07 16:20:32.067 32549 ERROR cinder.volume.manager
>>
>> 2016-07-07 16:20:32.108 32549 INFO oslo.messaging._drivers.impl_rabbit
>> [req-7e229d1f-06af-4b60-8e15-1f8c0e6eb084 - - - - -] Connecting to AMQP
>> server on controller:5672
>>
>> 2016-07-07 16:20:32.125 32549 INFO oslo.messaging._drivers.impl_rabbit
>> [req-7e229d1f-06af-4b60-8e15-1f8c0e6eb084 - - - - -] Connected to AMQP
>> server on controller:5672
>>
>> 2016-07-07 16:20:42.141 32549 ERROR cinder.service [-] Manager for
>> service cinder-volume OSKVM1@ceph is reporting problems, not sending
>> heartbeat. Service will appear "down".
>>
>> 2016-07-07 16:20:52.146 32549 ERROR cinder.service [-] Manager for
>> service cinder-volume OSKVM1@ceph is reporting problems, not sending
>> heartbeat. Service will appear "down".
>>
>> 2016-07-07 16:21:02.152 32549 ERROR cinder.service [-] Manager for
>> service cinder-volume OSKVM1@ceph is reporting problems, not sending
>> heartbeat. Service will appear "down".
>>
>> 2016-07-07 16:21:12.162 32549 ERROR cinder.service [-] Manager for
>> service cinder-volume OSKVM1@ceph is reporting problems, not sending
>> heartbeat. Service will appear "down".
>>
>> 2016-07-07 16:21:22.166 32549 ERROR cinder.service [-] Manager for
>> service cinder-volume OSKVM1@ceph is reporting problems, not sending
>> heartbeat. Service will appear "down".
>>
>> 2016-07-07 16:21:32.166 32549 ERROR cinder.service [-] Manager for
>> service cinder-volume OSKVM1@ceph is reporting problems, not sending
>> heartbeat. Service will appear "down".
>>
>> 2016-07-07 16:21:36.144 32549 WARNING cinder.volume.manager
>> [req-bdca26ed-5eb0-4647-8a5a-02925bacefed - - - - -] Update driver status
>> failed: (config name ceph) is uninitialized.
>>
>> */var/log/nova/nova-compute.log*
>>
>> 2016-07-07 16:21:53.287 31909 INFO nova.compute.resource_tracker
>> [req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Final resource view:
>> name=controller phys_ram=193168MB used_ram=1024MB phys_disk=8168GB
>> used_disk=1GB total_vcpus=40 used_vcpus=1 pci_stats=None
>>
>> 2016-07-07 16:21:53.326 31909 INFO nova.compute.resource_tracker
>> [req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Compute_service record
>> updated for OSKVM1:controller
>>
>> 2016-07-07 16:22:17.305 31909 INFO nova.compute.manager
>> [req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7] Starting instance...
>>
>> 2016-07-07 16:22:17.417 31909 INFO nova.compute.claims
>> [req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7] Attempting claim: memory 512 MB, disk
>> 1 GB
>>
>> 2016-07-07 16:22:17.418 31909 INFO nova.compute.claims
>> [req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7] Total memory: 193168 MB, used:
>> 1024.00 MB
>>
>> 2016-07-07 16:22:17.418 31909 INFO nova.compute.claims
>> [req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7] memory limit: 289752.00 MB, free:
>> 288728.00 MB
>>
>> 2016-07-07 16:22:17.418 31909 INFO nova.compute.claims
>> [req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7] Total disk: 8168 GB, used: 1.00 GB
>>
>> 2016-07-07 16:22:17.419 31909 INFO nova.compute.claims
>> [req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7] disk limit: 8168.00 GB, free: 8167.00
>> GB
>>
>> 2016-07-07 16:22:17.430 31909 INFO nova.compute.claims
>> [req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7] Claim successful
>>
>> 2016-07-07 16:22:17.917 31909 INFO nova.virt.libvirt.driver
>> [req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7] Creating image
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager
>> [req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7] Instance failed to spawn
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7] Traceback (most recent call last):
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2156, in
>> _build_resources
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     yield resources
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2009, in
>> _build_and_run_instance
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]
>> block_device_info=block_device_info)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2527,
>> in spawn
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     admin_pass=admin_password)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2953,
>> in _create_image
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     instance, size,
>> fallback_from_host)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6406,
>> in _try_fetch_image_cache
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     size=size)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
>> 240, in cache
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     *args, **kwargs)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
>> 811, in create_image
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     prepare_template(target=base,
>> max_size=size, *args, **kwargs)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 254,
>> in inner
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     return f(*args, **kwargs)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/imagebackend.py", line
>> 230, in fetch_func_sync
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     fetch_func(target=target, *args,
>> **kwargs)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2947,
>> in clone_fallback_to_fetch
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     libvirt_utils.fetch_image(*args,
>> **kwargs)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/libvirt/utils.py", line 408, in
>> fetch_image
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     max_size=max_size)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/images.py", line 123, in
>> fetch_to_raw
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     max_size=max_size)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/virt/images.py", line 113, in fetch
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     IMAGE_API.download(context,
>> image_href, dest_path=path)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/image/api.py", line 182, in download
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     dst_path=dest_path)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 367, in
>> download
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]
>> _reraise_translated_image_exception(image_id)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 613, in
>> _reraise_translated_image_exception
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     six.reraise(new_exc, None,
>> exc_trace)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 365, in
>> download
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     image_chunks =
>> self._client.call(context, 1, 'data', image_id)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 231, in call
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     result = getattr(client.images,
>> method)(*args, **kwargs)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 148, in
>> data
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     % urlparse.quote(str(image_id)))
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 280,
>> in get
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     return self._request('GET', url,
>> **kwargs)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 272,
>> in _request
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     resp, body_iter =
>> self._handle_response(resp)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]   File
>> "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 93, in
>> _handle_response
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]     raise exc.from_response(resp,
>> resp.content)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7] Invalid: 400 Bad Request: Unknown
>> scheme 'file' found in URI (HTTP 400)
>>
>> 2016-07-07 16:22:19.034 31909 ERROR nova.compute.manager [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7]
>>
>> 2016-07-07 16:22:19.041 31909 INFO nova.compute.manager
>> [req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7] Terminating instance
>>
>> 2016-07-07 16:22:19.050 31909 INFO nova.virt.libvirt.driver [-]
>> [instance: 39c047a0-4554-4160-a3fe-0943b3eed4a7] During wait destroy,
>> instance disappeared.
>>
>> 2016-07-07 16:22:19.120 31909 INFO nova.virt.libvirt.driver
>> [req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7] Deleting instance files
>> /var/lib/nova/instances/39c047a0-4554-4160-a3fe-0943b3eed4a7_del
>>
>> 2016-07-07 16:22:19.121 31909 INFO nova.virt.libvirt.driver
>> [req-94c123a2-b768-4e9e-a98e-e7bc10c3e592 db68bdf363ea4358a3d3c22bcfe18d13
>> 713114f3b9e54501a35a79e84c1e6c9d - - -] [instance:
>> 39c047a0-4554-4160-a3fe-0943b3eed4a7] Deletion of
>> /var/lib/nova/instances/39c047a0-4554-4160-a3fe-0943b3eed4a7_del complete
>>
>> 2016-07-07 16:22:53.779 31909 INFO nova.compute.resource_tracker
>> [req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Auditing locally
>> available compute resources for node controller
>>
>> 2016-07-07 16:22:53.868 31909 WARNING nova.virt.libvirt.driver
>> [req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Periodic task is
>> updating the host stat, it is trying to get disk instance-00000006, but
>> disk file was removed by concurrent operations such as resize.
>>
>> 2016-07-07 16:22:54.236 31909 INFO nova.compute.resource_tracker
>> [req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Total usable vcpus:
>> 40, total allocated vcpus: 1
>>
>> 2016-07-07 16:22:54.237 31909 INFO nova.compute.resource_tracker
>> [req-05a653d9-d629-497c-a4cd-d240c3e6c225 - - - - -] Final resource view:
>> name=controller phys_ram=193168MB used_ram=1024MB phys_disk=8168GB
>> used_disk=1GB total_vcpus=40 used_vcpus=1 pci_stats=None
>>
>>
>>
>> Regards
>>
>> Gaurav Goyal
>>
>>
>> On Thu, Jul 7, 2016 at 12:06 PM, Gaurav Goyal <er.gauravgo...@gmail.com>
>> wrote:
>>
>>> Hi Fran,
>>>
>>> Here is my cinder.conf file. Please help me analyze it.
>>>
>>> Do I need to create a volume group as mentioned in this link?
>>>
>>> http://docs.openstack.org/liberty/install-guide-rdo/cinder-storage-install.html
>>>
>>>
>>> [root@OSKVM1 ~]# grep -v "^#" /etc/cinder/cinder.conf|grep -v ^$
>>>
>>> [DEFAULT]
>>>
>>> rpc_backend = rabbit
>>>
>>> auth_strategy = keystone
>>>
>>> my_ip = 10.24.0.4
>>>
>>> notification_driver = messagingv2
>>>
>>> backup_ceph_conf = /etc/ceph/ceph.conf
>>>
>>> backup_ceph_user = cinder-backup
>>>
>>> backup_ceph_chunk_size = 134217728
>>>
>>> backup_ceph_pool = backups
>>>
>>> backup_ceph_stripe_unit = 0
>>>
>>> backup_ceph_stripe_count = 0
>>>
>>> restore_discard_excess_bytes = true
>>>
>>> backup_driver = cinder.backup.drivers.ceph
>>>
>>> glance_api_version = 2
>>>
>>> enabled_backends = ceph
>>>
>>> rbd_pool = volumes
>>>
>>> rbd_user = cinder
>>>
>>> rbd_ceph_conf = /etc/ceph/ceph.conf
>>>
>>> rbd_flatten_volume_from_snapshot = false
>>>
>>> rbd_secret_uuid = a536c85f-d660-4c25-a840-e321c09e7941
>>>
>>> rbd_max_clone_depth = 5
>>>
>>> rbd_store_chunk_size = 4
>>>
>>> rados_connect_timeout = -1
>>>
>>> volume_driver = cinder.volume.drivers.rbd.RBDDriver
>>>
>>> [BRCD_FABRIC_EXAMPLE]
>>>
>>> [CISCO_FABRIC_EXAMPLE]
>>>
>>> [cors]
>>>
>>> [cors.subdomain]
>>>
>>> [database]
>>>
>>> connection = mysql://cinder:cinder@controller/cinder
>>>
>>> [fc-zone-manager]
>>>
>>> [keymgr]
>>>
>>> [keystone_authtoken]
>>>
>>> auth_uri = http://controller:5000
>>>
>>> auth_url = http://controller:35357
>>>
>>> auth_plugin = password
>>>
>>> project_domain_id = default
>>>
>>> user_domain_id = default
>>>
>>> project_name = service
>>>
>>> username = cinder
>>>
>>> password = cinder
>>>
>>> [matchmaker_redis]
>>>
>>> [matchmaker_ring]
>>>
>>> [oslo_concurrency]
>>>
>>> lock_path = /var/lib/cinder/tmp
>>>
>>> [oslo_messaging_amqp]
>>>
>>> [oslo_messaging_qpid]
>>>
>>> [oslo_messaging_rabbit]
>>>
>>> rabbit_host = controller
>>>
>>> rabbit_userid = openstack
>>>
>>> rabbit_password = XXXX
>>>
>>> [oslo_middleware]
>>>
>>> [oslo_policy]
>>>
>>> [oslo_reports]
>>>
>>> [profiler]
>>>
>>> On Thu, Jul 7, 2016 at 11:38 AM, Fran Barrera <franbarre...@gmail.com>
>>> wrote:
>>>
>>>> Hello,
>>>>
>>>> Have you configured these two parameters in cinder.conf?
>>>>
>>>> rbd_user
>>>> rbd_secret_uuid
>>>>
>>>> Regards.
>>>>
>>>> 2016-07-07 15:39 GMT+02:00 Gaurav Goyal <er.gauravgo...@gmail.com>:
>>>>
>>>>> Hello Mr. Kees,
>>>>>
>>>>> Thanks for your response!
>>>>>
>>>>> My setup is
>>>>>
>>>>> Openstack Node 1 -> controller + network + compute1 (Liberty Version)
>>>>> Openstack node 2 --> Compute2
>>>>>
>>>>> Ceph version Hammer
>>>>>
>>>>> I am using Dell storage with the following status.
>>>>>
>>>>> DELL SAN storage is attached to both hosts as
>>>>>
>>>>> [root@OSKVM1 ~]# iscsiadm -m node
>>>>>
>>>>> 10.35.0.3:3260,1
>>>>> iqn.2001-05.com.equallogic:0-1cb196-07a83c107-47700000018575af-vol1
>>>>>
>>>>> 10.35.0.8:3260,1
>>>>> iqn.2001-05.com.equallogic:0-1cb196-07a83c107-47700000018575af-vol1
>>>>>
>>>>> 10.35.0.*:3260,-1
>>>>> iqn.2001-05.com.equallogic:0-1cb196-20d83c107-7290000002157606-vol2
>>>>>
>>>>> 10.35.0.8:3260,1
>>>>> iqn.2001-05.com.equallogic:0-1cb196-20d83c107-7290000002157606-vol2
>>>>>
>>>>> 10.35.0.*:3260,-1
>>>>> iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a000000245761a-vol3
>>>>>
>>>>> 10.35.0.8:3260,1
>>>>> iqn.2001-05.com.equallogic:0-1cb196-f0783c107-70a000000245761a-vol3
>>>>>
>>>>> 10.35.0.*:3260,-1
>>>>> iqn.2001-05.com.equallogic:0-1cb196-fda83c107-927000000275761a-vol4
>>>>> 10.35.0.8:3260,1
>>>>> iqn.2001-05.com.equallogic:0-1cb196-fda83c107-927000000275761a-vol4
>>>>>
>>>>>
>>>>> Since in my setup the same LUNs are mapped to both hosts,
>>>>>
>>>>> I chose 2 LUNs on OpenStack Node 1 and 2 on OpenStack Node 2.
>>>>>
>>>>>
>>>>> *Node1 has *
>>>>>
>>>>> /dev/sdc1                2.0T  3.1G  2.0T   1% /var/lib/ceph/osd/ceph-0
>>>>>
>>>>> /dev/sdd1                2.0T  3.8G  2.0T   1% /var/lib/ceph/osd/ceph-1
>>>>>
>>>>> *Node 2 has *
>>>>>
>>>>> /dev/sdd1                2.0T  3.4G  2.0T   1% /var/lib/ceph/osd/ceph-2
>>>>>
>>>>> /dev/sde1                2.0T  3.5G  2.0T   1% /var/lib/ceph/osd/ceph-3
>>>>>
>>>>> [root@OSKVM1 ~]# ceph status
>>>>>
>>>>>     cluster 9f923089-a6c0-4169-ace8-ad8cc4cca116
>>>>>
>>>>>      health HEALTH_WARN
>>>>>
>>>>>             mon.OSKVM1 low disk space
>>>>>
>>>>>      monmap e1: 1 mons at {OSKVM1=10.24.0.4:6789/0}
>>>>>
>>>>>             election epoch 1, quorum 0 OSKVM1
>>>>>
>>>>>      osdmap e40: 4 osds: 4 up, 4 in
>>>>>
>>>>>       pgmap v1154: 576 pgs, 5 pools, 6849 MB data, 860 objects
>>>>>
>>>>>             13857 MB used, 8154 GB / 8168 GB avail
>>>>>
>>>>>              576 active+clean
>>>>>
>>>>> *Can you please help me confirm whether this is the correct configuration
>>>>> for my setup?*
>>>>>
>>>>> After this setup, I am trying to configure Cinder and Glance to use
>>>>> RBD as a backend.
>>>>> The Glance image is already stored in RBD.
>>>>> I am following this link:
>>>>> http://docs.ceph.com/docs/master/rbd/rbd-openstack/
>>>>>
>>>>> I have managed to store the Glance image in RBD, but I am running into
>>>>> an issue with the Cinder configuration. Can you please help me with this?
>>>>> As per the link, I need to configure these parameters under [ceph], but I
>>>>> do not have a separate [ceph] section. In fact, I could find all of these
>>>>> parameters under [DEFAULT]. Is it OK to configure them under [DEFAULT]?
>>>>> CONFIGURING CINDER
>>>>> <http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-cinder>
>>>>>
>>>>> OpenStack requires a driver to interact with Ceph block devices. You
>>>>> must also specify the pool name for the block device. On your OpenStack
>>>>> node, edit /etc/cinder/cinder.conf by adding:
>>>>>
>>>>> [DEFAULT]
>>>>> ...
>>>>> enabled_backends = ceph
>>>>> ...
>>>>> [ceph]
>>>>> volume_driver = cinder.volume.drivers.rbd.RBDDriver
>>>>> rbd_pool = volumes
>>>>> rbd_ceph_conf = /etc/ceph/ceph.conf
>>>>> rbd_flatten_volume_from_snapshot = false
>>>>> rbd_max_clone_depth = 5
>>>>> rbd_store_chunk_size = 4
>>>>> rados_connect_timeout = -1
>>>>> glance_api_version = 2
>>>>>
>>>>> I see the following errors in the Cinder service status:
>>>>>
>>>>> systemctl status openstack-cinder-volume.service
>>>>>
>>>>> Jul 07 09:37:01 OSKVM1 cinder-volume[136247]: 2016-07-07 09:37:01.058
>>>>> 136259 ERROR cinder.service [-] Manager for service cinder-volume
>>>>> OSKVM1@ceph is reporting problems, not sending heartbeat. Service
>>>>> will appear "down".
>>>>>
>>>>> Jul 07 09:37:02 OSKVM1 cinder-volume[136247]: 2016-07-07 09:37:02.040
>>>>> 136259 WARNING cinder.volume.manager
>>>>> [req-561ddd3c-9560-4374-a958-7a2c103af7ee - - - - -] Update driver status
>>>>> failed: (config name ceph) is uninitialized.
>>>>>
>>>>> Jul 07 09:37:11 OSKVM1 cinder-volume[136247]: 2016-07-07 09:37:11.059
>>>>> 136259 ERROR cinder.service [-] Manager for service cinder-volume
>>>>> OSKVM1@ceph is reporting problems, not sending heartbeat. Service
>>>>> will appear "down".
>>>>>
>>>>>
>>>>>
>>>>> [root@OSKVM2 ~]# rbd -p images ls
>>>>>
>>>>> a8b45c8a-a5c8-49d8-a529-1e4088bdbf3f
>>>>>
>>>>> [root@OSKVM2 ~]# rados df
>>>>>
>>>>> pool name        KB        objects  clones  degraded  unfound    rd  rd KB    wr    wr KB
>>>>> backups           0              0       0         0        0     0      0     0        0
>>>>> images      7013377            860       0         0        0  9486   7758  2580  7013377
>>>>> rbd               0              0       0         0        0     0      0     0        0
>>>>> vms               0              0       0         0        0     0      0     0        0
>>>>> volumes           0              0       0         0        0     0      0     0        0
>>>>>
>>>>>   total used       14190236          860
>>>>>
>>>>>   total avail    8550637828
>>>>>
>>>>>   total space    8564828064
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> [root@OSKVM2 ~]# ceph auth list
>>>>>
>>>>> installed auth entries:
>>>>>
>>>>>
>>>>> mds.OSKVM1
>>>>>
>>>>> key: AQCK6XtXNBFdDBAAXmX73gBqK3lyakSxxP+XjA==
>>>>>
>>>>> caps: [mds] allow
>>>>>
>>>>> caps: [mon] allow profile mds
>>>>>
>>>>> caps: [osd] allow rwx
>>>>>
>>>>> osd.0
>>>>>
>>>>> key: AQAB4HtX7q27KBAAEqcuJXwXAJyD6a1Qu/MXqA==
>>>>>
>>>>> caps: [mon] allow profile osd
>>>>>
>>>>> caps: [osd] allow *
>>>>>
>>>>> osd.1
>>>>>
>>>>> key: AQC/4ntXFJGdFBAAADYH03iQTF4jWI1LnBZeJg==
>>>>>
>>>>> caps: [mon] allow profile osd
>>>>>
>>>>> caps: [osd] allow *
>>>>>
>>>>> osd.2
>>>>>
>>>>> key: AQCa43tXr12fDhAAzbq6FO2+8m9qg1B12/99Og==
>>>>>
>>>>> caps: [mon] allow profile osd
>>>>>
>>>>> caps: [osd] allow *
>>>>>
>>>>> osd.3
>>>>>
>>>>> key: AQA/5HtXDNfcLxAAJWawgxc1nd8CB+4uH/8fdQ==
>>>>>
>>>>> caps: [mon] allow profile osd
>>>>>
>>>>> caps: [osd] allow *
>>>>>
>>>>> client.admin
>>>>>
>>>>> key: AQBNknJXE/I2FRAA+caW02eje7GZ/uv1O6aUgA==
>>>>>
>>>>> caps: [mds] allow
>>>>>
>>>>> caps: [mon] allow *
>>>>>
>>>>> caps: [osd] allow *
>>>>>
>>>>> client.bootstrap-mds
>>>>>
>>>>> key: AQBOknJXjLloExAAGjMRfjp5okI1honz9Nx4wg==
>>>>>
>>>>> caps: [mon] allow profile bootstrap-mds
>>>>>
>>>>> client.bootstrap-osd
>>>>>
>>>>> key: AQBNknJXDUMFKBAAZ8/TfDkS0N7Q6CbaOG3DyQ==
>>>>>
>>>>> caps: [mon] allow profile bootstrap-osd
>>>>>
>>>>> client.bootstrap-rgw
>>>>>
>>>>> key: AQBOknJXQAUiABAA6IB4p4RyUmrsxXk+pv4u7g==
>>>>>
>>>>> caps: [mon] allow profile bootstrap-rgw
>>>>>
>>>>> client.cinder
>>>>>
>>>>> key: AQCIAHxX9ga8LxAAU+S3Vybdu+Cm2bP3lplGnA==
>>>>>
>>>>> caps: [mon] allow r
>>>>>
>>>>> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
>>>>> pool=volumes, allow rwx pool=vms, allow rx pool=images
>>>>>
>>>>> client.cinder-backup
>>>>>
>>>>> key: AQCXAHxXAVSNKhAAV1d/ZRMsrriDOt+7pYgJIg==
>>>>>
>>>>> caps: [mon] allow r
>>>>>
>>>>> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
>>>>> pool=backups
>>>>>
>>>>> client.glance
>>>>>
>>>>> key: AQCVAHxXupPdLBAA7hh1TJZnvSmFSDWbQiaiEQ==
>>>>>
>>>>> caps: [mon] allow r
>>>>>
>>>>> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
>>>>> pool=images
>>>>>
>>>>>
>>>>> Regards
>>>>>
>>>>> Gaurav Goyal
>>>>>
>>>>> On Thu, Jul 7, 2016 at 2:54 AM, Kees Meijs <k...@nefos.nl> wrote:
>>>>>
>>>>>> Hi Gaurav,
>>>>>>
>>>>>> Unfortunately I'm not completely sure about your setup, but I guess it
>>>>>> makes sense to configure Cinder and Glance to use RBD as a backend. It
>>>>>> seems to me you're trying to store VM images directly on an OSD
>>>>>> filesystem.
>>>>>>
>>>>>> Please refer to http://docs.ceph.com/docs/master/rbd/rbd-openstack/ for
>>>>>> details.
>>>>>>
>>>>>> Regards,
>>>>>> Kees
>>>>>>
>>>>>> On 06-07-16 23:03, Gaurav Goyal wrote:
>>>>>> >
>>>>>> > I am installing Ceph Hammer and integrating it with OpenStack Liberty
>>>>>> > for the first time.
>>>>>> >
>>>>>> > My local disk has only 500 GB but I need to create a 600 GB VM, so I
>>>>>> > have created a soft link to the Ceph filesystem:
>>>>>> >
>>>>>> > lrwxrwxrwx 1 root root 34 Jul 6 13:02 instances ->
>>>>>> > /var/lib/ceph/osd/ceph-0/instances
>>>>>> > [root@OSKVM1 nova]# pwd
>>>>>> > /var/lib/nova
>>>>>> > [root@OSKVM1 nova]#
>>>>>> >
>>>>>> > Now, when I am trying to create an instance, it gives the following
>>>>>> > error (as checked in nova-compute.log).
>>>>>> > I need your help to fix this issue.
>>>>>> >
>>>>>>
>>>>>> _______________________________________________
>>>>>> ceph-users mailing list
>>>>>> ceph-users@lists.ceph.com
>>>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> ceph-users mailing list
>>>>> ceph-users@lists.ceph.com
>>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>>>
>>>>>
>>>>
>>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>
>
> --
> Jason
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
