Re: [Openstack] ceph and openstack

2016-03-08 Thread Erdősi Péter

On 2016-03-08 at 6:04, Martin Wilderoth wrote:


Where should I run cinder-volume when I use Ceph?
On the controller?
On the ceph mon or mds?
Or somewhere else?
If you use cinder-conversion (and I think you will), it may be good to 
dedicate resources to it. (That role converts qcow2 images to raw, which 
causes a lot of IO.)
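
For a sense of the cost, the conversion step boils down to something like 
this (an illustrative qemu-img call; the paths are made up):

# illustrative only -- cinder runs qemu-img itself; paths are made up
qemu-img convert -f qcow2 -O raw image.qcow2 image.raw

It reads the whole image and writes it back out, so it hits the disks hard.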

If you do, that also gives you another sensible place to run cinder-volume.

Regards,
 Peter


Re: [Openstack] ceph and openstack

2016-03-08 Thread Martin Wilderoth
It crashes right after loading the RBD driver. Maybe my setup is incorrect.
Thanks

setup and error

rados pools

data
metadata
rbd
images
volumes
backups
vms


cinder-volume log

2016-03-08 07:39:38.713 7856 INFO cinder.service [-] Starting cinder-volume node (version 7.0.1)
2016-03-08 07:39:38.715 7856 INFO cinder.volume.manager [req-6e635315-b503-4e04-874e-53be817244ee - - - - -] Starting volume driver RBDDriver (1.2.0)
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup [req-6bc8b049-aafe-4302-8fe1-457dce30ed0f - - - - -] 'max_avail'
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup Traceback (most recent call last):
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/oslo_service/threadgroup.py", line 154, in wait
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     x.wait()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/oslo_service/threadgroup.py", line 51, in wait
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     return self.thread.wait()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 175, in wait
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     return self._exit_event.wait()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     return hubs.get_hub().switch()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, in switch
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     return self.greenlet.switch()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     result = function(*args, **kwargs)
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 645, in run_service
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     service.start()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 146, in start
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     self.manager.init_host()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     return f(*args, **kwargs)
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 378, in init_host
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     self.driver.init_capabilities()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     return f(*args, **kwargs)
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/cinder/volume/driver.py", line 662, in init_capabilities
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     stats = self.get_volume_stats(True)
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 105, in wrapper
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     return f(*args, **kwargs)
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 420, in get_volume_stats
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     self._update_volume_stats()
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup   File "/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py", line 405, in _update_volume_stats
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup     pool_stats['max_avail'] // units.Gi)
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup KeyError: 'max_avail'
2016-03-08 07:39:38.776 7856 ERROR oslo_service.threadgroup
2016-03-08 07:39:38.781 5012 INFO oslo_service.service [req-6bc8b049-aafe-4302-8fe1-457dce30ed0f - - - - -] Child 7856 exited with status 0

I turned it off; it was looping, forking too fast...
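
The failing key comes from the pool stats the cluster reports back. If my 
Dumpling cluster simply never reports max_avail, that would explain the 
KeyError. A quick check (assuming the driver takes these numbers from 
ceph df) would be:

# dump pool stats and look for a max_avail field under pools -> stats
ceph df --format json-pretty

If max_avail is missing there, the Liberty RBD driver will crash exactly 
like above.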

cinder.conf

[DEFAULT]
verbose=True
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.5.11
glance_host = controller
volume_driv
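
For comparison, the RBD-related settings in a Liberty cinder.conf are 
usually along these lines (a sketch with placeholder values, not my exact 
file):

# sketch: RBD backend settings, placeholders only
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <uuid of the libvirt secret>

These can sit in [DEFAULT] as above, or in a named section selected with 
enabled_backends.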

Re: [Openstack] ceph and openstack

2016-03-08 Thread Geo Varghese
Which error is it showing?

On Tue, Mar 8, 2016 at 11:37 AM, Martin Wilderoth <martin.wilder...@linserv.se> wrote:

> Thanks, both.
>
> I will run it on the controller node.
>
> My cinder-volume crashed.
> Are there any dependencies, or is my ceph cluster too old?
> (I'm running Dumpling.) I will investigate.
>
> Thanks


--
Regards,
Geo Varghese


Re: [Openstack] ceph and openstack

2016-03-07 Thread Martin Wilderoth
Thanks, both.

I will run it on the controller node.

My cinder-volume crashed.
Are there any dependencies, or is my ceph cluster too old?
(I'm running Dumpling.) I will investigate.

Thanks




Re: [Openstack] ceph and openstack

2016-03-07 Thread Joshua Harlow

On 03/07/2016 09:32 PM, Mike Smith wrote:

You can also run Ceph for nova ephemeral disks without Cinder at all.
  You’d do that in nova.conf.


Out of curiosity, how is the performance of doing this (using Ceph for 
nova ephemeral disks)? Any details you can share on the network required 
to do this decently, and on the application usage patterns you have that 
can tolerate the latency (whatever it is)?


-Josh



Re: [Openstack] ceph and openstack

2016-03-07 Thread Mike Smith
If you are using Ceph as a Cinder backend, you would likely want to run 
cinder-volume on your controller node(s).   You could run it anywhere I 
suppose, including on the Ceph nodes themselves, but I’d recommend having it on 
the controllers.  Wherever you run it, you’d need a properly configured 
ceph.conf, and if you are using cephx authentication, you’d need the keyring 
files.  Your compute nodes would need that conf and keys also.
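
For example, with cephx the cinder client key is typically created along 
these lines (a sketch; the pool names are just the common ones, adjust to 
yours):

# sketch: create a cinder client key with access to the usual pools
ceph auth get-or-create client.cinder \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'

The resulting keyring then goes into /etc/ceph/ on the cinder and compute 
hosts.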

You can also run Ceph for nova ephemeral disks without Cinder at all.  You’d do 
that in nova.conf.
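
Roughly like this on the compute nodes (a sketch of the Liberty-era 
settings; the secret UUID is whatever you registered with libvirt):

# sketch: nova.conf for RBD-backed ephemeral disks
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <uuid of the libvirt secret>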

We use both at Overstock.  Ceph for nova ephemeral for general use, and also 
Ceph as one option in a multi-backend Cinder configuration.  We also use it for 
a Glance store, which is a fantastic option because it makes disk provisioning 
for Nova instant, since you’re essentially snapshotting and image RBD into an 
RBD for Nova/Cinder.

Mike Smith
Lead Cloud Systems Architect
Overstock.com





Re: [Openstack] ceph and openstack

2016-03-07 Thread Erik McCormick
I run it on control nodes generally. You can also dedicate boxes to it if
you expect it to get extremely busy. I like to leave my ceph boxen to do
ceph only.

-Erik


[Openstack] ceph and openstack

2016-03-07 Thread Martin Wilderoth
Hello

Where should I run cinder-volume when I use Ceph?
On the controller?
On the ceph mon or mds?
Or somewhere else?

Maybe it doesn't matter?

Thanks in advance

Regards Martin