> We are using SCST over RBD and not seeing much of a degradation... Need to
> make sure you tune SCST properly and use multiple sessions...

Sure. My post was not intended to say that iSCSI over RBD is *slow*, just that 
it scales differently from native RBD client access.

If I have 10 OSD hosts, each with a 10G client-facing link, and the OSDs can 
saturate those links, I have 100G of aggregate nominal throughput under ideal 
conditions. If I put an iSCSI target (or an active/passive pair of targets) in 
front of that to connect iSCSI initiators to RBD devices, every byte has to 
pass through the active target's single link, so my aggregate nominal 
throughput for iSCSI clients under ideal conditions is 10G.
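
To make the arithmetic concrete, here's a back-of-the-envelope sketch (the 
host count, link speed, and target count are just the figures from the example 
above, not a recommendation):

# Back-of-the-envelope throughput ceilings: native RBD vs. iSCSI gateway.
# The figures below are just the ones from the example above.

osd_hosts = 10       # OSD hosts, each with its own client-facing link
link_gbps = 10       # nominal speed of each client-facing link (Gbit/s)
active_targets = 1   # an active/passive target pair still counts as 1

# Native RBD: clients talk to all OSD hosts directly, so the ceiling is
# the sum of every client-facing link.
native_ceiling_gbps = osd_hosts * link_gbps       # 100 Gbit/s

# iSCSI: all client traffic funnels through the active target's link.
iscsi_ceiling_gbps = active_targets * link_gbps   # 10 Gbit/s

print(f"native RBD ceiling: {native_ceiling_gbps} Gbit/s")
print(f"iSCSI ceiling:      {iscsi_ceiling_gbps} Gbit/s")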

If you don't saturate that 10G link, it should perform just fine, and the only 
hit should be the slight (possibly insignificant, depending on hardware and 
layout) latency bump from the extra hop.

Don't get me wrong: I'm not trying to knock iSCSI over RBD at all. It's a 
perfectly legitimate and solid setup for connecting RBD-unaware clients to RBD 
storage. My intention was just to point out the difference in architecture and 
that sizing the target hosts is a consideration you don't have in a pure RBD 
environment.

That said, if network utilization at the targets does become an issue at some 
point, you could scale out with additional targets and balance the iSCSI 
clients across them.
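
As a rough illustration of what that balancing buys you (a hypothetical 
sketch; the target names, client count, and per-client demand are made up, and 
a real deployment would likely use multipathing rather than a static 
assignment):

# Hypothetical sketch: statically spread iSCSI initiators across several
# target hosts round-robin and estimate the resulting per-target link load.
# All names and figures here are made up for illustration.

from collections import defaultdict

targets = ["tgt1", "tgt2", "tgt3"]                 # iSCSI target hosts
link_gbps = 10                                     # client-facing link each
initiators = [f"vmhost{n}" for n in range(1, 13)]  # 12 iSCSI clients
demand_gbps = 2                                    # assumed peak per client

# Round-robin assignment of initiators to targets.
assignment = defaultdict(list)
for i, client in enumerate(initiators):
    assignment[targets[i % len(targets)]].append(client)

# Check whether any target's client-facing link is oversubscribed at peak.
for tgt, clients in assignment.items():
    load = len(clients) * demand_gbps
    flag = "OK" if load <= link_gbps else "OVERSUBSCRIBED"
    print(f"{tgt}: {len(clients)} initiators, ~{load} Gbit/s peak [{flag}]")
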
--
Hugo
h...@slabnet.com: email, xmpp/jabber
also on Signal

---- From: Somnath Roy <somnath....@sandisk.com> -- Sent: 2015-11-04 - 13:48 ----

> We are using SCST over RBD and not seeing much of a degradation... Need to 
> make sure you tune SCST properly and use multiple sessions...
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Hugo 
> Slabbert
> Sent: Wednesday, November 04, 2015 1:44 PM
> To: Jason Dillaman; Gaetan SLONGO
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] iSCSI over RDB is a good idea ?
>
>> The disadvantage of the iSCSI design is that it adds an extra hop between 
>> your VMs and the backing Ceph cluster.
>
> ...and introduces a bottleneck. iSCSI initiators are "dumb" in comparison to 
> native ceph/rbd clients. Whereas native clients will talk to all the relevant 
> OSDs directly, iSCSI initiators will just talk to the target (unless there is 
> some awesome magic in the RBD/tgt integration that I'm unaware of). So the 
> targets and their connectivity are a bottleneck.
>
> --
> Hugo
> h...@slabnet.com: email, xmpp/jabber
> also on Signal
>
>

