[ceph-users] Ceph Thin Provisioning on OpenStack Instances

2016-03-31 Thread Mario Codeniera
Hi,

Has anyone done thin provisioning on OpenStack instances (virtual
machines)? With the current configuration my cloud works well on Ceph
0.94.5 with SSD journals (creating a 40GB instance dropped from 18 minutes
to around 7 minutes, even though the SSD IOPS are not great). What I want
now is to save storage space: the whole 40GB image is copied from Glance
to every newly created virtual machine. Is there any chance it could store
only the changes on top, somewhat like a VMware-style snapshot, with the
base image staying in place?

Current setup:
xxx --> (uploaded Glance image, say CentOS 7 at 40GB)

when an instance is created:
xxx + yyy, where yyy is the new changes
(40GB + MB/GB of changes)


*Planned setup:*
(this saves storage because xxx is not copied again)
only *yyy* is stored in Ceph
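
For reference, this maps directly onto RBD's copy-on-write layering: the
Glance image becomes a protected snapshot and each instance disk is a
clone that only stores the blocks it overwrites. A rough sketch of the
same xxx/yyy picture at the rbd command line, assuming pools named
"images" and "vms" (the names are only illustrative):

    # snapshot the base image once and protect it so clones can be layered on it
    rbd snap create images/xxx@base
    rbd snap protect images/xxx@base

    # each instance disk is a copy-on-write clone; only its own writes (yyy) use space
    rbd clone images/xxx@base vms/yyy

    # check thin provisioning: provisioned size vs. space the clone actually uses
    rbd du vms/yyy
    rbd children images/xxx@base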


In my testing on the current cloud, an OpenStack snapshot still copies the
whole image plus the new changes. Correct me if I am wrong, as I am still
on the Kilo release (2015.1.1), or maybe it is a misconfiguration? And the
more users I add, the more OSDs I will have to add as well.
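
In most cases the full copy means Nova is downloading and converting the
image rather than cloning it in RBD. The pieces the rbd-openstack guide
leans on for copy-on-write cloning are roughly the following; treat it as
a sketch against the Kilo layout (the pool name "vms" and the file names
are only examples), not as a drop-in config:

    # the base image must be uploaded in RAW format, otherwise cloning is skipped
    qemu-img convert -f qcow2 -O raw CentOS-7.qcow2 CentOS-7.raw
    glance image-create --name centos7 --disk-format raw \
        --container-format bare --file CentOS-7.raw

    # glance-api.conf: expose the RBD location of the image to Nova/Cinder
    [DEFAULT]
    show_image_direct_url = True

    # nova.conf on the compute nodes: keep instance disks in RBD
    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf

With that in place a new instance should appear as an RBD clone of the
image's snapshot rather than a second 40GB copy, which rbd du on the
instance disk can confirm.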

Any insights are highly appreciated.


Thanks,
Mario


Re: [ceph-users] Ceph with SSD and HDD mixed

2015-07-21 Thread Mario Codeniera
Hi Johannes,

Thanks for your reply.

I am new to this and have no idea how to set up the configuration, or
where to start with the four options you mentioned.
I hope you can expand on them if possible.

Best regards,
Mario





On Tue, Jul 21, 2015 at 2:44 PM, Johannes Formann  wrote:

> Hi,
>
> > Can someone give me some insight: is it possible to mix SSDs with HDDs
> > on the OSDs?
>
> you’ll have more or less four options:
>
> - SSDs for the journals of the OSD processes (the SSDs must be able to
> perform well on synchronous writes)
> - an SSD-only pool for „high performance“ data
> - using SSDs for the primary copy (fast reads); this can be combined with
> the first option
> - using a cache pool with an SSD-only pool in front of the main disk pool
>
> > How can we speed up file uploads? For example, in our experience it
> > took around 18 minutes to load a 20GB image (via Glance) on a 1Gb
> > network. Or is that just normal?
>
> That’s about 20MB/s; for (I guess) sequential writes on a disk-only
> cluster that’s OK. You can improve that with SSDs, but you have to choose
> the best option for your setup, depending on the expected workload.
>
> greetings
>
> Johannes
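
On the "where do I start" question, this is what the second option (an
SSD-only pool) looks like in practice. On recent Ceph releases (Luminous
and later) device classes make it a few commands; on a Hammer-era cluster
the same effect needs a hand-edited CRUSH map with a separate SSD root.
A sketch only, with the rule name, pool name and PG count picked purely
for illustration:

    # CRUSH rule that only selects OSDs carrying the "ssd" device class
    ceph osd crush rule create-replicated ssd-only default host ssd

    # create a pool on that rule (PG count depends on your cluster size)
    ceph osd pool create fastpool 128 128 replicated ssd-only

    # an existing pool can be switched onto the SSD rule the same way
    # ("rbd" here is just an example pool name)
    ceph osd pool set rbd crush_rule ssd-only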


[ceph-users] Ceph with SSD and HDD mixed

2015-07-20 Thread Mario Codeniera
Hi,

Can someone give me some insight: is it possible to mix SSDs with HDDs on
the OSDs?

How can we speed up file uploads? For example, in our experience it took
around 18 minutes to load a 20GB image (via Glance) on a 1Gb network. Or
is that just normal?
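
As a data point, 20GB in roughly 18 minutes works out to about 19MB/s,
well below what a 1Gb link (around 110MB/s) can carry, so it is worth
checking whether the cluster itself or the Glance upload path is the
bottleneck. A quick baseline, assuming a scratch pool you can write
benchmark objects into (the pool name is only an example):

    # raw write throughput for 60 seconds, keeping the objects for the read test
    rados bench -p testpool 60 write --no-cleanup

    # sequential read throughput, then remove the benchmark objects
    rados bench -p testpool 60 seq
    rados -p testpool cleanup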


Regards,
Mario


[ceph-users] Nova with Ceph generate error

2015-07-09 Thread Mario Codeniera
Hi,

This is my first time here. I am having an issue with my OpenStack
configuration, which works perfectly for Cinder and Glance on the Kilo
release on CentOS 7. I based my setup on the rbd-openstack manual.


If I enable rbd in nova.conf, it generates an error like the following in
the dashboard, while the logs don't show any errors:

Internal Server Error (HTTP 500) (Request-ID: req-231347dd-f14c-4f97-8a1d-851a149b037c)
Code: 500
Details:
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 343, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2737, in terminate_instance
    do_terminate_instance(instance, bdms)
  File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 445, in inner
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2735, in do_terminate_instance
    self._set_instance_error_state(context, instance)
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2725, in do_terminate_instance
    self._delete_instance(context, instance, bdms, quotas)
  File "/usr/lib/python2.7/site-packages/nova/hooks.py", line 149, in inner
    rv = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2694, in _delete_instance
    quotas.rollback()
  File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2664, in _delete_instance
    self._shutdown_instance(context, instance, bdms)
  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2604, in _shutdown_instance
    self.volume_api.detach(context, bdm.volume_id)
  File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 214, in wrapper
    res = method(self, ctx, volume_id, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/nova/volume/cinder.py", line 365, in detach
    cinderclient(context).volumes.detach(volume_id)
  File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 334, in detach
    return self._action('os-detach', volume)
  File "/usr/lib/python2.7/site-packages/cinderclient/v2/volumes.py", line 311, in _action
    return self.api.client.post(url, body=body)
  File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 91, in post
    return self._cs_request(url, 'POST', **kwargs)
  File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 85, in _cs_request
    return self.request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/cinderclient/client.py", line 80, in request
    return super(SessionClient, self).request(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/adapter.py", line 206, in request
    resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/adapter.py", line 95, in request
    return self.session.request(url, method, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/utils.py", line 318, in inner
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 397, in request
    raise exceptions.from_response(resp, method, url)
Created: 10 Jul 2015, 4:40 a.m.


Again, if I disable it, everything works, but the error is generated on
the compute node. I also notice that the dashboard doesn't display the
hypervisor of the compute nodes; maybe that is related.
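
For what it's worth, the traceback shows Nova failing on a Cinder API call
while detaching a volume during instance deletion, so the Cinder/Keystone
settings in nova.conf are worth double-checking. Beyond that, when rbd is
enabled the rbd-openstack guide also expects the libvirt authentication
pieces to be in place on every compute node, and missing them is a common
source of attach/detach trouble. A sketch only, with the user, pool and
UUID taken from the guide's examples rather than from this cluster:

    # nova.conf, [libvirt] section
    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

    # the same UUID must exist as a libvirt "ceph" secret on each compute
    # node, holding the client.cinder key
    virsh secret-define --file secret.xml
    virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
        --base64 $(cat client.cinder.key)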

It was working on Juno before, but there was unexpected rework when the
network infrastructure was changed, so I reran the setup script and found
lots of conflicts, among other things. I had previously been using
qemu-img-rhev and qemu-kvm-rhev from oVirt, but the new Hammer packages
(from the Ceph repository) seem to have solved that issue.

I hope someone can enlighten me.

Thanks,
Mario