We are using the iSCSI gateway in ceph-12.2 with vSphere 6.5 as the client.
It's an active/passive setup, per LUN.
We chose this solution because that's what we could get RH support for, and it
sticks to the "no SPOF" philosophy.

Performance is ~25-30% slower than mounting the same rbd image directly with krbd.
This is based on the following: we spun up an FC27 VM within the VMware cluster,
attached a vdisk from the VMware datastore, and ran various fio tests.
Then we mapped the same rbd image directly with krbd and ran the same tests
(of course, we removed the iSCSI exposure first).
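
Roughly, the comparison looked like the sketch below (a minimal Python wrapper
around fio; the device paths, pool/image name and fio options are illustrative
placeholders, not our exact job files):

#!/usr/bin/env python3
"""Sketch of the two fio runs we compared (placeholders, not our real job files)."""
import subprocess

# Common fio options: 4k random write, direct I/O, 60 s time-based run.
FIO_OPTS = [
    "--name=bench", "--rw=randwrite", "--bs=4k", "--iodepth=32",
    "--ioengine=libaio", "--direct=1", "--runtime=60", "--time_based",
    "--group_reporting",
]

def run_fio(target):
    """Run fio against a block device and print its summary output."""
    result = subprocess.run(["fio", "--filename=" + target] + FIO_OPTS,
                            capture_output=True, text=True, check=True)
    print(result.stdout)

# 1) Inside the FC27 guest: a vdisk backed by the iSCSI-exposed datastore.
run_fio("/dev/sdb")                                        # placeholder guest device

# 2) On a client host, after removing the iSCSI export:
#    map the same rbd image with krbd and repeat the run.
subprocess.run(["rbd", "map", "vmware/lun0"], check=True)  # placeholder pool/image
run_fio("/dev/rbd0")                                       # placeholder rbd device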

Regards
Heðin Ejdesgaard


On mán, 2018-05-28 at 15:47 -0500, Brady Deetz wrote:
> You might look into Open vStorage as a gateway into Ceph. 
> 
> On Mon, May 28, 2018, 2:42 PM Steven Vacaroaia <ste...@gmail.com> wrote:
> > Hi,
> > 
> > I need to design and build a storage platform that will be "consumed" 
> > mainly by VMware. 
> > 
> > Ceph is my first choice. 
> > 
> > As far as I can see, there are 3 ways Ceph storage can be made available to 
> > VMware: 
> > 
> > 1. iSCSI
> > 2. NFS-Ganesha
> > 3. an rbd image mounted on a Linux NFS server
> > 
> > Any suggestions/advice as to which one is better (and why), as well as 
> > links to documentation/best practices, will be truly appreciated. 
> > 
> > Thanks
> > Steven
> 
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
