Hi David!!

Thanks a lot for your answer. But what happens when you have, say, two or more 
monitors and one of them becomes unresponsive? Is another one used after a 
timeout? And what happens when a client wants to access some data, needs to 
query a monitor to find out where that data lives, and the monitor does not 
answer? Is a monitor that becomes unresponsive discarded for the following 
queries about where the data sits in the cluster?
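
Just to check that I understand it: I assume the client simply has all the 
monitors listed in its ceph.conf and tries another one when the first does not 
answer. Something like this is only my guess of how it would look (the 
addresses are invented):

    [global]
    # all monitors listed; the client falls back to another one if the
    # first does not answer (IPs are made up for the example)
    mon_host = 10.0.0.1, 10.0.0.2, 10.0.0.3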

So, to put it another way: in terms of performance, you would not use any kind 
of solution that does not go through librbd? Is the performance poor or bad 
when using mounted /dev/rbdX devices? Or do you mean it in terms of data 
integrity?
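
Just so we are talking about the same thing, this is how I understand the two 
access paths (the pool and image names below are invented by me, only a 
sketch):

    # kernel RBD, what gives you /dev/rbdX: the image is mapped through
    # the kernel module and shows up as a normal block device on the host
    rbd map mypool/vm-disk-1          # -> /dev/rbd0

    # librbd: QEMU/KVM talks to the OSDs directly through the library,
    # no /dev/rbdX on the host at all (assuming QEMU was built with rbd
    # support; other options omitted)
    qemu-system-x86_64 ... -drive format=raw,file=rbd:mypool/vm-disk-1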

I was planning to use Xen with Ceph, but after your advice... 😀 Would you 
definitely go with KVM?

Thanks a lot again 😉
Cheers,


Egoitz,

> On 13 Feb 2018, at 20:19, David Turner <drakonst...@gmail.com> wrote:
> 
> Monitors are not required for accessing data from the Ceph cluster.  Clients 
> will ask a monitor for a current OSD map and then use that OSD map to 
> communicate with the OSDs directly for all reads and writes.  The map 
> includes the crush map which has all of the information a client needs to 
> know where every object is in the cluster.  Having 3 mons is a good number 
> for small deployments.  5 mons give better redundancy in the monitor 
> quorum.  Always avoid an even number of mons.
> 
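
(Side note to check that I follow: I guess this object-to-OSD mapping is what 
one can also see by hand with the command below; the pool and object names are 
just invented by me.)

    # asks the cluster where a given object would be placed: it prints
    # the placement group and the OSDs that hold it
    ceph osd map mypool some-object
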
> librbd is definitely the way to go for accessing RBDs for a hypervisor as 
> opposed to fuse or krbd.  For a quick and easy hypervisor using Ceph, I like 
> Proxmox.  It natively has the ability to use KVM with Ceph without having to 
> configure it yourself.  It comes with a nice gui as well to see the console 
> screen for your VMs.  It also has a fairly simple guide to cluster 
> hypervisors together to provide HA support for your VMs.  For larger scale VM 
> deployments, Openstack is probably the way I would go.
> 
>> On Tue, Feb 13, 2018 at 2:11 PM Egoitz Aurrekoetxea <ego...@sarenet.es> 
>> wrote:
>> Good afternoon,
>> 
>> As I'm new to Ceph, I was wondering what would be the most appropriate way
>> to use it with the Xen hypervisor (on a plain Linux installation, CentOS
>> for instance). I have read that the least recommended option is to just
>> mount the /dev/rbdX device on a mount point and expose that space
>> to the hypervisor, but I find it pretty easy and it seems stable. It does
>> not seem to perform badly... Is it better to use, for instance, librbd
>> with KVM? Does it perform better?
>> 
>> By the way, it seems the monitor node is used in order to access the
>> space in the OSD cluster. I have also read that Ceph has been designed
>> with no single point of failure in mind, but... is it possible
>> to configure several monitor nodes and then, after a very short timeout
>> or similar, access the file system through the other nodes? What
>> would be the most appropriate way of configuring this to avoid a
>> machine losing its storage if the monitor fails? Could you please point
>> me in the right direction? Perhaps with several monitors or....
>> 
>> By the way, if you consider it would be better to use another
>> hypervisor or configuration (with librados or whatever) with Ceph, could
>> you please suggest that too? Help for the newbie :p :) :)
>> 
>> Best regards,
>> 
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
