Hi, Ariel, gentlemen,

I have the same question, but with regard to multipath. Is it possible to
simply export an iSCSI target on each Ceph node and use multipath on the
client side?
Could that lead to data inconsistency?
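For reference, the client side of such a setup might look roughly like the
fragment below. This is only a hypothetical sketch: the WWID and alias are
placeholders, not values from this thread, and it assumes each Ceph node
exports the same LUN so the initiator sees multiple paths to one device.

```
# /etc/multipath.conf -- hypothetical sketch; the WWID and alias below are
# placeholders, not from this thread.
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        # WWID of the iSCSI LUN as seen by the initiator (example value)
        wwid  360000000000000000e00000000010001
        alias ceph-rbd0
    }
}
```

Note that multipath only aggregates paths; it does nothing to coordinate
caching between the gateways, which is where the consistency question above
comes in.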

Regards, Vasily.


On Fri, May 22, 2015 at 12:59 PM, Gerson Ariel <ar...@bisnis2030.com> wrote:

> I apologize beforehand for not using a more descriptive subject for my
> question.
>
>
>
> On Fri, May 22, 2015 at 4:55 PM, Gerson Ariel <ar...@bisnis2030.com>
> wrote:
>
>> Our hardware consists of three identical servers, each with 8 OSD disks,
>> 1 SSD disk as a journal, 1 disk for the OS, 32 GB of ECC RAM, and 4 Gb
>> copper Ethernet. We have been running this cluster since February 2015,
>> and the system load is mostly light, with lots of idle time.
>>
>> Right now we have a node that mounts RBD block devices and exports them
>> over NFS. It works quite well, but at the cost of one extra node acting as
>> a bridge between the storage clients (VMs) and the storage cluster (Ceph
>> OSDs and MONs).
>>
>> What I want to know is: is there any reason why I shouldn't mount RBD
>> devices on one of those servers, the ones that also run the OSD and MON
>> daemons, and export them over NFS or iSCSI? Assuming I have already done
>> my homework to make the setup highly available with Pacemaker (e.g. a
>> floating IP and iSCSI/NFS resources), wouldn't something like this be
>> better, since it is more reliable? I.e., I remove the middle-man node(s),
>> so I only have to worry about the Ceph nodes and the VM hosts.
>>
>> Thank you
>>
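The HA setup described above could be sketched along these lines with the
Pacemaker crm shell. This is only an illustrative fragment: the resource
names, IP address, and parameters are placeholders, not taken from this
thread.

```
# Hypothetical Pacemaker (crmsh) sketch of a floating IP plus NFS server
# resource group; all names and the IP below are placeholders.
primitive p_vip ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.100 cidr_netmask=24 \
    op monitor interval=10s
primitive p_nfs ocf:heartbeat:nfsserver \
    params nfs_shared_infodir=/var/lib/nfs \
    op monitor interval=30s
# Keep the floating IP and the NFS server together, started in order.
group g_nfs p_vip p_nfs
```

A colocation/ordering group like this ensures clients always reach the NFS
export through the floating IP on whichever node currently holds the group.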
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
