Hi Yaniv,

> On 18 Dec 2016, at 17:37, Yaniv Kaul <yk...@redhat.com> wrote:
> 
> 
> 
>> On Sun, Dec 18, 2016 at 6:21 PM, Alessandro De Salvo 
>> <alessandro.desa...@roma1.infn.it> wrote:
>> Hi,
>> having a 3-node ceph cluster is the bare minimum to make it work, unless 
>> you want just a replica-2 mode, which is not safe.
> 
> How well does it perform?

One of the ceph clusters we use has exactly this setup: 3 DELL R630 servers 
(ceph jewel) with 6 1TB NL-SAS disks, i.e. 3 mons and 6 OSDs. We bound the 
cluster network to a dedicated 1Gbps interface. I can say it works pretty well: 
performance reaches up to 100MB/s per RBD device, which is the expected maximum 
for that network connection. Resiliency is also pretty good: we can lose 2 OSDs 
(i.e. a full machine) without impacting performance.
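A quick back-of-the-envelope check of why ~100MB/s is about the ceiling for a 
1Gbps link (the overhead figure is a rough assumption, not a measurement):

```python
# Raw payload ceiling of a 1 Gbps (decimal) link, in MB/s.
link_bps = 1_000_000_000
raw_MBps = link_bps / 8 / 1e6
print(raw_MBps)  # 125.0

# Ethernet + TCP/IP framing typically eats a few percent (assumed ~6% here),
# and RBD traffic shares the link with other flows, so observing ~100 MB/s
# sustained per RBD device is consistent with a saturated 1 Gbps interface.
usable_MBps = raw_MBps * 0.94
print(round(usable_MBps))  # 118
```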

>  
>> It's not true that ceph is hard to configure: you can very easily use 
>> ceph-deploy, have Puppet configure it, or even run it in containers. Using 
>> docker is in fact the easiest solution; it really takes 10 minutes to bring 
>> a cluster up. I've tried it both with jewel (official containers) and 
>> kraken (custom containers), and it works pretty well.
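For the record, a minimal sketch of what the docker approach looks like with 
the ceph/daemon image. The image tag, network addresses, and device name below 
are illustrative assumptions, not the exact setup described above:

```shell
# Hypothetical example: 192.168.0.0/24 as the public network, 192.168.0.10 as
# this node's IP, /dev/sdb as the data disk -- adjust all of these.

# One monitor per node:
docker run -d --net=host \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
  -e MON_IP=192.168.0.10 \
  -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
  ceph/daemon mon

# One OSD container per data disk (needs --privileged for device access):
docker run -d --net=host --privileged=true \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
  -v /dev:/dev \
  -e OSD_DEVICE=/dev/sdb \
  ceph/daemon osd
```

Repeat on each of the 3 nodes and the cluster forms itself once the mons reach 
quorum.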
> 
> This could be a great blog post in ovirt.org site - care to write something 
> describing the configuration and setup?

Oh sure, if it's of general interest I'll be glad to. How can I do it? :-)
Cheers,

   Alessandro 

> Y.
>  
>> The real problem is not creating and configuring a ceph cluster, but using 
>> it from ovirt, since that requires cinder, i.e. a minimal openstack setup. We 
>> have it and it's working pretty well, but it requires some work. For your 
>> reference, we have cinder running on an ovirt VM using gluster.
>> Cheers,
>> 
>>    Alessandro 
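To give an idea of the cinder side, this is roughly the shape of the RBD 
backend section in cinder.conf. All names here (backend label, pool, user) are 
placeholders, not our actual configuration:

```ini
[DEFAULT]
enabled_backends = rbd-1

[rbd-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
# UUID of the libvirt secret holding the cephx key on the hypervisors:
rbd_secret_uuid = <libvirt secret uuid>
```

The matching cephx keyring and libvirt secret have to be distributed to the 
ovirt hosts as well.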
>> 
>>> On 18 Dec 2016, at 17:07, Yaniv Kaul <yk...@redhat.com> wrote:
>>> 
>>> 
>>> 
>>>> On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel <rajatjpa...@gmail.com> wrote:
>>>> Dear Team,
>>>> 
>>>> We are using Ovirt 4.0 for a POC; I want to check what we are doing with 
>>>> all the Ovirt gurus.
>>>> 
>>>> We have 2 HP ProLiant DL380 servers with a 500GB SAS disk, 4 x 1TB SAS 
>>>> disks, and a 500GB SSD.
>>>> 
>>>> What we have done: we have installed the ovirt hypervisor on this 
>>>> hardware, and we have a physical server running our ovirt manager. For the 
>>>> ovirt hypervisor we are using only one 500GB HDD; the rest we have kept 
>>>> for ceph, so we have 3 nodes running as guests on ovirt for ceph. My 
>>>> question to you all is whether what I am doing is right or wrong.
>>> 
>>> I think Ceph requires a lot more resources than above. It's also a bit more 
>>> challenging to configure. I would highly recommend a 3-node cluster with 
>>> Gluster.
>>> Y.
>>>  
>>>> 
>>>> Regards
>>>> Rajat
>>>> 
>>>> 
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users@ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>> 
>>> 
> 
