We are using Ceph+RBD+NFS under Pacemaker for VMware. We are doing iSCSI using
SCST, but have not used it against VMware, only Solaris and Hyper-V.

It generally works and performs well enough. The biggest issues are the
clustering for iSCSI ALUA support and for NFS failover, most of which we have
developed in house, and we still have not quite got that right yet.
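
For anyone curious what that looks like, here is a rough sketch of the kind of Pacemaker group we mean for the NFS side. It is only illustrative: the group name, device path, mountpoint, subnet and IP are placeholders I made up, it assumes the RBD image is already mapped on every node (by a separate resource agent or a systemd unit) and that the NFS server daemon itself is running, and it drives the stock ocf:heartbeat Filesystem/exportfs/IPaddr2 agents via pcs.

#!/usr/bin/env python3
"""Sketch only: build a Pacemaker group for NFS-on-RBD failover.

Assumptions (not from the original post): the RBD image is already
mapped on every node, the device/mountpoint/subnet/IP below are
placeholders, and the standard ocf:heartbeat agents plus pcs are
installed. Run once on one cluster node.
"""
import subprocess

def pcs(*args):
    # Thin wrapper so each step is visible and failures raise immediately.
    subprocess.run(["pcs", *args], check=True)

# Filesystem on the mapped RBD device (placeholder device and mountpoint).
pcs("resource", "create", "vmstore_fs", "ocf:heartbeat:Filesystem",
    "device=/dev/rbd/rbd/vmstore", "directory=/export/vmstore",
    "fstype=xfs", "--group", "nfs_vmstore")

# NFS export of that filesystem to the ESXi hosts (placeholder subnet).
pcs("resource", "create", "vmstore_export", "ocf:heartbeat:exportfs",
    "clientspec=10.0.0.0/24", "directory=/export/vmstore",
    "options=rw,no_root_squash,sync", "fsid=1", "--group", "nfs_vmstore")

# Floating IP the ESXi datastore points at; it moves with the group.
pcs("resource", "create", "vmstore_ip", "ocf:heartbeat:IPaddr2",
    "ip=10.0.0.50", "cidr_netmask=24", "--group", "nfs_vmstore")

In practice you would also add ordering/colocation constraints and probably an ocf:heartbeat:nfsserver resource; this is just the shape of the thing.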



From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Daniel K
Sent: Saturday, 3 March 2018 1:03 AM
To: Joshua Chen <csc...@asiaa.sinica.edu.tw>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph iSCSI is a prank?

There have been quite a few VMware/Ceph threads on the mailing list in the past.

One setup I've been toying with is a Linux guest running on the VMware host on
local storage, with the guest mounting a Ceph RBD with a filesystem on it, then
exporting that via NFS back to the VMware host as a datastore.
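
For what it's worth, the in-guest steps for that RBD-backed variant look roughly like the sketch below. All names are placeholders I picked for illustration (pool 'rbd', image 'vmstore', mountpoint /export/vmstore, ESXi subnet 10.0.0.0/24), the image is assumed to exist already, and it simply shells out to the usual rbd/mkfs/exportfs tools.

#!/usr/bin/env python3
"""Sketch of the in-guest steps: map an RBD image, put a filesystem
on it and export it over NFS to the ESXi host.

Placeholder assumptions: pool 'rbd', image 'vmstore', mountpoint
/export/vmstore, ESXi subnet 10.0.0.0/24; the image already exists.
"""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Map the image; 'rbd map' prints the block device it created (e.g. /dev/rbd0).
dev = subprocess.run(["rbd", "map", "rbd/vmstore"],
                     check=True, capture_output=True, text=True).stdout.strip()

run(["mkfs.xfs", dev])                 # one-time only: filesystem on the RBD
run(["mkdir", "-p", "/export/vmstore"])
run(["mount", dev, "/export/vmstore"])

# Export to the ESXi hosts; the export is then added in vSphere as an NFS datastore.
with open("/etc/exports", "a") as f:
    f.write("/export/vmstore 10.0.0.0/24(rw,no_root_squash,sync)\n")
run(["exportfs", "-ra"])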

Exporting CephFS via NFS to VMware is another option.
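
The CephFS flavour of the same idea, equally hand-wavy: the monitor address, secret file and export subnet below are assumptions, and it uses the kernel CephFS client rather than FUSE or NFS-Ganesha.

#!/usr/bin/env python3
"""Sketch of the CephFS variant: kernel-mount CephFS in the guest and
export it over NFS. Monitor address, secret file and subnet are placeholders."""
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

run(["mkdir", "-p", "/mnt/cephfs"])
# Kernel CephFS mount; 'mon1' and the secret file path are assumptions.
run(["mount", "-t", "ceph", "mon1:6789:/", "/mnt/cephfs",
     "-o", "name=admin,secretfile=/etc/ceph/admin.secret"])

with open("/etc/exports", "a") as f:
    f.write("/mnt/cephfs 10.0.0.0/24(rw,no_root_squash,sync,fsid=2)\n")
run(["exportfs", "-ra"])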

I'm not sure how well shared storage will work with either of these
configurations, but they work fairly well for single-host deployments.

There are also quite a few products that do support iSCSI on Ceph. SUSE
Enterprise Storage is a commercial one; PetaSAN is an open-source option.


On Fri, Mar 2, 2018 at 2:24 AM, Joshua Chen <csc...@asiaa.sinica.edu.tw> wrote:
Dear all,
  I wonder how we could support VM systems with Ceph storage (block devices)?
My colleagues are waiting for my answer for VMware (vSphere 5), and I myself use
oVirt (RHEV); the default protocol is iSCSI.
  I know that OpenStack/Cinder works well with Ceph, and I have heard Proxmox
does too. But currently we are using VMware and oVirt.


Your wise suggestions are appreciated.

Cheers
Joshua


On Thu, Mar 1, 2018 at 3:16 AM, Mark Schouten <m...@tuxis.nl> wrote:
Does Xen still not support RBD? Ceph has been around for years now!
With kind regards,

--
Kerio Operator in the Cloud? https://www.kerioindecloud.nl/
Mark Schouten | Tuxis Internet Engineering
KvK: 61527076 | http://www.tuxis.nl/
T: 0318 200208 | i...@tuxis.nl


From: Massimiliano Cuttini <m...@phoenixweb.it>
To: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
Sent: 28-2-2018 13:53
Subject: [ceph-users] Ceph iSCSI is a prank?

I was building Ceph in order to use it with iSCSI.
But I just saw from the docs that it needs:

CentOS 7.5
(which is not available yet, it's still at 7.4)
https://wiki.centos.org/Download

Kernel 4.17
(which is not available yet, it is still at 4.15.7)
https://www.kernel.org/

So I guess there is no official support and this is just a bad prank.

Ceph has been ready to be used with S3 for many years,
but it needs the kernel of the next century to work with such an old technology
like iSCSI.
So sad.







_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
