Re: [ceph-users] iSCSI over RBD is a good idea?

2015-11-05 Thread Jason Dillaman
I am not sure of its status -- it looks like it was part of oVirt 3.6 planning
but was recently moved to 4.0 on the wiki.  There is a video walkthrough of the 
running integration from this past August [1].  You would just need to deploy 
Cinder and Keystone -- no need for all the other bits.  It also appears that 
oVirt has some development underway to containerize a small Cinder/Glance 
OpenStack setup [2].

[1] https://www.youtube.com/watch?v=elEkGfjLITs
[2] http://www.ovirt.org/CinderGlance_Docker_Integration
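
For reference, the Ceph-facing part of a Cinder-only deployment mostly comes
down to the RBD backend stanza in cinder.conf.  A minimal sketch -- the pool
name, cephx user, and secret UUID below are placeholder assumptions, not
anything from oVirt's docs:

    [DEFAULT]
    enabled_backends = ceph-rbd

    [ceph-rbd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes                       # hypothetical pool name
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder                        # hypothetical cephx user
    rbd_secret_uuid = <libvirt-secret-uuid>  # placeholder

Keystone then just provides the authentication endpoint that Cinder (and an
oVirt external provider) would talk to.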

-- 

Jason Dillaman 
Red Hat Ceph Storage Engineering 
dilla...@redhat.com 
http://www.redhat.com 


- Original Message - 

> From: "Gaetan SLONGO" <gslo...@it-optics.com>
> To: "Hugo Slabbert" <h...@slabnet.com>
> Cc: ceph-users@lists.ceph.com, "Somnath Roy" <somnath@sandisk.com>,
> "Jason Dillaman" <dilla...@redhat.com>
> Sent: Thursday, November 5, 2015 2:37:16 AM
> Subject: Re: [ceph-users] iSCSI over RBD is a good idea?

> Thank you everybody for your interesting answers.

> I saw the Cinder integration in oVirt. Has anyone already done that? I
> don't know OpenStack (yet). Is it possible to deploy only the Cinder
> component without the complete OpenStack setup?

> Thanks !

> - Original Message -

> From: "Hugo Slabbert" <h...@slabnet.com>
> To: "Somnath Roy" <somnath@sandisk.com>, "Jason Dillaman"
> <dilla...@redhat.com>, "Gaetan SLONGO" <gslo...@it-optics.com>
> Cc: ceph-users@lists.ceph.com
> Sent: Wednesday, November 4, 2015 23:30:56
> Subject: Re: RE: [ceph-users] iSCSI over RBD is a good idea?

> > We are using SCST over RBD and not seeing much of a degradation...Need to
> > make sure you tune SCST properly and use multiple sessions..

> Sure. My post was not intended to say that iSCSI over RBD is *slow*, just
> that it scales differently than native RBD client access.

> If I have 10 OSD hosts with a 10G link each facing clients, provided the OSDs
> can saturate the 10G links, I have 100G of aggregate nominal throughput
> under ideal conditions. If I put an iSCSI target (or an active/passive pair
> of targets) in front of that to connect iSCSI initiators to RBD devices, my
> aggregate nominal throughput for iSCSI clients under ideal conditions is
> 10G.

> If you don't flat-top that, then it should perform just fine and the only hit
> should be the slight (possibly insignificant, depending on hardware and
> layout) latency bump from the extra hop.

> Don't get me wrong: I'm not trying to knock iSCSI over RBD at all. It's a
> perfectly legitimate and solid setup for connecting RBD-unaware clients into
> RBD storage. My intention was just to point out the difference in
> architecture and that sizing of the target hosts is a consideration that's
> different from a pure RBD environment.

> Though, I suppose if network utilization at the targets becomes an issue at
> any point, you could scale out with additional targets and balance the iSCSI
> clients across them.
> --
> Hugo
> h...@slabnet.com: email, xmpp/jabber
> also on Signal

>  From: Somnath Roy <somnath@sandisk.com> -- Sent: 2015-11-04 - 13:48
> 

> > We are using SCST over RBD and not seeing much of a degradation...Need to
> > make sure you tune SCST properly and use multiple sessions..
> >
> > Thanks & Regards
> > Somnath
> >
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Hugo Slabbert
> > Sent: Wednesday, November 04, 2015 1:44 PM
> > To: Jason Dillaman; Gaetan SLONGO
> > Cc: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] iSCSI over RBD is a good idea?
> >
> >> The disadvantage of the iSCSI design is that it adds an extra hop between
> >> your VMs and the backing Ceph cluster.
> >
> > ...and introduces a bottleneck. iSCSI initiators are "dumb" in comparison
> > to native ceph/rbd clients. Whereas native clients will talk to all the
> > relevant OSDs directly, iSCSI initiators will just talk to the target
> > (unless there is some awesome magic in the RBD/tgt integration that I'm
> > unaware of). So the targets and their connectivity are a bottleneck.
> >
> > --
> > Hugo
> > h...@slabnet.com: email, xmpp/jabber
> > also on Signal
> >
> >

> --

> www.it-optics.com
> 
> Gaëtan SLONGO | IT & Project Manager
> Boulevard Initialis, 28 - 7000 Mons, BELGIUM
> Company : +32 (0)65 84 23 85
> Direct :  +32 (0)65 32 85 88
> Fax : +32 (0)65 84 66 76
> GPG Key : gslongo-gpg_key.asc
> 

> - Please consider your environmental responsibility before printing this
> e-mail -
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] iSCSI over RBD is a good idea?

2015-11-05 Thread Lars Marowsky-Bree
On 2015-11-04T14:30:56, Hugo Slabbert wrote:

> Sure. My post was not intended to say that iSCSI over RBD is *slow*, just 
> that it scales differently than native RBD client access.
> 
> If I have 10 OSD hosts with a 10G link each facing clients, provided the OSDs 
> can saturate the 10G links, I have 100G of aggregate nominal throughput under 
> ideal conditions. If I put an iSCSI target (or an active/passive pair of 
> targets) in front of that to connect iSCSI initiators to RBD devices, my 
> aggregate nominal throughput for iSCSI clients under ideal conditions is 10G.

It's worth noting that you can use multiple iSCSI target gateways with
MPIO, which allows you to scale performance and availability
horizontally.

This doesn't help with the additional network/gateway hop, but it does
mean the bandwidth of a single gateway is no longer the limiting factor.

And that works today.
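
As a concrete illustration of the initiator side on Linux -- the portal
addresses and IQN here are invented for the example:

    # discover the same target through two gateway portals
    iscsiadm -m discovery -t sendtargets -p 192.168.1.11
    iscsiadm -m discovery -t sendtargets -p 192.168.1.12

    # log in to both portals for the same IQN
    iscsiadm -m node -T iqn.2015-11.com.example:rbd-demo -p 192.168.1.11 --login
    iscsiadm -m node -T iqn.2015-11.com.example:rbd-demo -p 192.168.1.12 --login

    # both paths should now appear under a single dm-multipath device
    multipath -ll

dm-multipath then presents one block device and can fail over between the
portals, or spread load across them where the target stack supports
active/active.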


Regards,
Lars

-- 
Architect Storage/HA
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] iSCSI over RBD is a good idea?

2015-11-04 Thread Hugo Slabbert
> The disadvantage of the iSCSI design is that it adds an extra hop between 
> your VMs and the backing Ceph cluster.

...and introduces a bottleneck. iSCSI initiators are "dumb" in comparison to 
native ceph/rbd
clients. Whereas native clients will talk to all the relevant OSDs directly, 
iSCSI initiators will just talk to the target (unless there is some awesome 
magic in the RBD/tgt integration that I'm unaware of). So the targets and their 
connectivity are a bottleneck.
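
For contrast, a native client is just a few lines against the
python-rados/python-rbd bindings, after which I/O fans out from the client
straight to the relevant OSDs (the pool and image names here are made up):

    import rados
    import rbd

    # librados computes object placement via CRUSH, so the reads and
    # writes below go directly to the OSDs rather than through a gateway
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')      # hypothetical pool name
    image = rbd.Image(ioctx, 'myimage')    # hypothetical image name
    data = image.read(0, 4096)             # read 4 KiB from offset 0
    image.close()
    ioctx.close()
    cluster.shutdown()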

--
Hugo
h...@slabnet.com: email, xmpp/jabber
also on Signal



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] iSCSI over RBD is a good idea?

2015-11-04 Thread Hugo Slabbert
> We are using SCST over RBD and not seeing much of a degradation...Need to 
> make sure you tune SCST properly and use multiple sessions..

Sure. My post was not intended to say that iSCSI over RBD is *slow*, just that 
it scales differently than native RBD client access.

If I have 10 OSD hosts with a 10G link each facing clients, provided the OSDs 
can saturate the 10G links, I have 100G of aggregate nominal throughput under 
ideal conditions. If I put an iSCSI target (or an active/passive pair of 
targets) in front of that to connect iSCSI initiators to RBD devices, my 
aggregate nominal throughput for iSCSI clients under ideal conditions is 10G.

If you don't flat-top that, then it should perform just fine and the only hit 
should be the slight (possibly insignificant, depending on hardware and layout) 
latency bump from the extra hop.

Don't get me wrong: I'm not trying to knock iSCSI over RBD at all. It's a 
perfectly legitimate and solid setup for connecting RBD-unaware clients into 
RBD storage. My intention was just to point out the difference in architecture 
and that sizing of the target hosts is a consideration that's different from a 
pure RBD environment.

Though, I suppose if network utilization at the targets becomes an issue at any 
point, you could scale out with additional targets and balance the iSCSI 
clients across them.
--
Hugo
h...@slabnet.com: email, xmpp/jabber
also on Signal

 From: Somnath Roy <somnath@sandisk.com> -- Sent: 2015-11-04 - 13:48 


> We are using SCST over RBD and not seeing much of a degradation...Need to 
> make sure you tune SCST properly and use multiple sessions.. 
>
> Thanks & Regards
> Somnath
>
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Hugo 
> Slabbert
> Sent: Wednesday, November 04, 2015 1:44 PM
> To: Jason Dillaman; Gaetan SLONGO
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] iSCSI over RBD is a good idea?
>
>> The disadvantage of the iSCSI design is that it adds an extra hop between 
>> your VMs and the backing Ceph cluster.
>
> ...and introduces a bottleneck. iSCSI initiators are "dumb" in comparison to 
> native ceph/rbd clients. Whereas native clients will talk to all the relevant 
> OSDs directly, iSCSI initiators will just talk to the target (unless there is 
> some awesome magic in the RBD/tgt integration that I'm unaware of). So the 
> targets and their connectivity are a bottleneck.
>
> --
> Hugo
> h...@slabnet.com: email, xmpp/jabber
> also on Signal
>
>




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] iSCSI over RBD is a good idea?

2015-11-04 Thread Jason Dillaman
At this time, it appears that oVirt is pursuing a strategy of Ceph/RBD 
integration by way of OpenStack.  There is a feature scheduled for oVirt 4 to 
integrate with Cinder, specifically calling out the RBD use case [1].

On the iSCSI front, RBD should be functional with STGT and LIO in 
active/passive mode.  There is active development to add support for RBD+LIO 
active/active [2].  The disadvantage of the iSCSI design is that it adds an 
extra hop between your VMs and the backing Ceph cluster.

[1] http://www.ovirt.org/Features/Cinder_Integration
[2] http://tracker.ceph.com/projects/ceph/wiki/Clustered_SCSI_target_using_RBD
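
To make the STGT route concrete: stgt gained an rbd backing-store type (the
bs_rbd module described in the ceph.com post referenced elsewhere in this
thread), so exporting an image is roughly the following.  The tid, IQN, and
pool/image names are invented for the sketch:

    # create a target and attach an RBD image via stgt's rbd backing store
    tgtadm --lld iscsi --mode target --op new --tid 1 \
        --targetname iqn.2015-11.com.example:rbd-demo
    tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
        --bstype rbd --backing-store rbd/myimage

    # allow initiators to connect (open ACL; tighten for real use)
    tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL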

-- 

Jason Dillaman 


- Original Message -
> From: "Gaetan SLONGO" 
> To: ceph-users@lists.ceph.com
> Sent: Tuesday, November 3, 2015 10:00:59 AM
> Subject: [ceph-users] iSCSI over RBD is a good idea?
> 
> Dear Ceph users,
> 
> We are currently working on the design of a virtualization infrastructure
> using oVirt, and we would like to use Ceph.
> 
> The problem is that, at this time, there is no native integration of Ceph in
> oVirt. One possibility is to export RBD devices over iSCSI (maybe you have a
> better one?).
> 
> I've seen this post http://ceph.com/dev-notes/adding-support-for-rbd-to-stgt/
> but this seems to be deprecated on RHEL 7... Has anyone already done this on
> RHEL/CentOS 7 with targetd (or something else)? Are there any performance
> issues?
> 
> Thanks in advance!
> 
> Best regards,
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] iSCSI over RBD is a good idea?

2015-11-04 Thread Somnath Roy
We are using SCST over RBD and not seeing much of a degradation...Need to make 
sure you tune SCST properly and use multiple sessions..
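
For anyone curious what the SCST side can look like, a sketch of exporting a
kernel-mapped RBD device is below; the device path, IQN, and the choice of
the vdisk_blockio handler are illustrative assumptions only:

    # /etc/scst.conf (sketch) -- export a mapped RBD device over iSCSI
    HANDLER vdisk_blockio {
        DEVICE rbd_disk0 {
            filename /dev/rbd/rbd/myimage   # hypothetical mapped RBD device
        }
    }

    TARGET_DRIVER iscsi {
        enabled 1
        TARGET iqn.2015-11.com.example:rbd-disk0 {
            enabled 1
            LUN 0 rbd_disk0
        }
    }

The "multiple sessions" advice is initiator-side: log in through more than one
portal/NIC and aggregate the sessions with MPIO.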

Thanks & Regards
Somnath

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Hugo 
Slabbert
Sent: Wednesday, November 04, 2015 1:44 PM
To: Jason Dillaman; Gaetan SLONGO
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] iSCSI over RBD is a good idea?

> The disadvantage of the iSCSI design is that it adds an extra hop between 
> your VMs and the backing Ceph cluster.

...and introduces a bottleneck. iSCSI initiators are "dumb" in comparison to 
native ceph/rbd clients. Whereas native clients will talk to all the relevant 
OSDs directly, iSCSI initiators will just talk to the target (unless there is 
some awesome magic in the RBD/tgt integration that I'm unaware of). So the 
targets and their connectivity are a bottleneck.

--
Hugo
h...@slabnet.com: email, xmpp/jabber
also on Signal

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] iSCSI over RBD is a good idea?

2015-11-04 Thread Gaetan SLONGO
Dear Somnath Roy, 

Thank you for your answer. What kind of tuning do you recommend? 

Thanks ! 

Best regards, 

- Original Message -

From: "Somnath Roy" <somnath@sandisk.com> 
To: "Hugo Slabbert" <h...@slabnet.com>, "Jason Dillaman" <dilla...@redhat.com>, 
"Gaetan SLONGO" <gslo...@it-optics.com> 
Cc: ceph-users@lists.ceph.com 
Sent: Wednesday, November 4, 2015 22:48:27 
Subject: RE: [ceph-users] iSCSI over RBD is a good idea? 

We are using SCST over RBD and not seeing much of a degradation...Need to make 
sure you tune SCST properly and use multiple sessions.. 

Thanks & Regards 
Somnath 

-Original Message- 
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Hugo 
Slabbert 
Sent: Wednesday, November 04, 2015 1:44 PM 
To: Jason Dillaman; Gaetan SLONGO 
Cc: ceph-users@lists.ceph.com 
Subject: Re: [ceph-users] iSCSI over RBD is a good idea? 

> The disadvantage of the iSCSI design is that it adds an extra hop between 
> your VMs and the backing Ceph cluster. 

...and introduces a bottleneck. iSCSI initiators are "dumb" in comparison to 
native ceph/rbd clients. Whereas native clients will talk to all the relevant 
OSDs directly, iSCSI initiators will just talk to the target (unless there is 
some awesome magic in the RBD/tgt integration that I'm unaware of). So the 
targets and their connectivity are a bottleneck. 

-- 
Hugo 
h...@slabnet.com: email, xmpp/jabber 
also on Signal 




-- 




www.it-optics.com 

Gaëtan SLONGO | IT & Project Manager 
Boulevard Initialis, 28 - 7000 Mons, BELGIUM 
Company : +32 (0)65 84 23 85 
Direct :  +32 (0)65 32 85 88 
Fax :     +32 (0)65 84 66 76 
GPG Key :   gslongo-gpg_key.asc 



- Please consider your environmental responsibility before printing this e-mail -

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] iSCSI over RBD is a good idea?

2015-11-04 Thread Gaetan SLONGO
Thank you everybody for your interesting answers. 

I saw the Cinder integration in oVirt. Has anyone already done that? I don't 
know OpenStack (yet). Is it possible to deploy only the Cinder component 
without the complete OpenStack setup? 

Thanks ! 

- Original Message -

From: "Hugo Slabbert" <h...@slabnet.com> 
To: "Somnath Roy" <somnath@sandisk.com>, "Jason Dillaman" 
<dilla...@redhat.com>, "Gaetan SLONGO" <gslo...@it-optics.com> 
Cc: ceph-users@lists.ceph.com 
Sent: Wednesday, November 4, 2015 23:30:56 
Subject: Re: RE: [ceph-users] iSCSI over RBD is a good idea? 

> We are using SCST over RBD and not seeing much of a degradation...Need to 
> make sure you tune SCST properly and use multiple sessions.. 

Sure. My post was not intended to say that iSCSI over RBD is *slow*, just that 
it scales differently than native RBD client access. 

If I have 10 OSD hosts with a 10G link each facing clients, provided the OSDs 
can saturate the 10G links, I have 100G of aggregate nominal throughput under 
ideal conditions. If I put an iSCSI target (or an active/passive pair of 
targets) in front of that to connect iSCSI initiators to RBD devices, my 
aggregate nominal throughput for iSCSI clients under ideal conditions is 10G. 

If you don't flat-top that, then it should perform just fine and the only hit 
should be the slight (possibly insignificant, depending on hardware and layout) 
latency bump from the extra hop. 

Don't get me wrong: I'm not trying to knock iSCSI over RBD at all. It's a 
perfectly legitimate and solid setup for connecting RBD-unaware clients into 
RBD storage. My intention was just to point out the difference in architecture 
and that sizing of the target hosts is a consideration that's different from a 
pure RBD environment. 

Though, I suppose if network utilization at the targets becomes an issue at any 
point, you could scale out with additional targets and balance the iSCSI 
clients across them. 
-- 
Hugo 
h...@slabnet.com: email, xmpp/jabber 
also on Signal 

 From: Somnath Roy <somnath@sandisk.com> -- Sent: 2015-11-04 - 13:48 
 

> We are using SCST over RBD and not seeing much of a degradation...Need to 
> make sure you tune SCST properly and use multiple sessions.. 
> 
> Thanks & Regards 
> Somnath 
> 
> -Original Message- 
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Hugo 
> Slabbert 
> Sent: Wednesday, November 04, 2015 1:44 PM 
> To: Jason Dillaman; Gaetan SLONGO 
> Cc: ceph-users@lists.ceph.com 
> Subject: Re: [ceph-users] iSCSI over RBD is a good idea? 
> 
>> The disadvantage of the iSCSI design is that it adds an extra hop between 
>> your VMs and the backing Ceph cluster. 
> 
> ...and introduces a bottleneck. iSCSI initiators are "dumb" in comparison to 
> native ceph/rbd clients. Whereas native clients will talk to all the relevant 
> OSDs directly, iSCSI initiators will just talk to the target (unless there is 
> some awesome magic in the RBD/tgt integration that I'm unaware of). So the 
> targets and their connectivity are a bottleneck. 
> 
> -- 
> Hugo 
> h...@slabnet.com: email, xmpp/jabber 
> also on Signal 
> 
> 





-- 




www.it-optics.com 

Gaëtan SLONGO | IT & Project Manager 
Boulevard Initialis, 28 - 7000 Mons, BELGIUM 
Company : +32 (0)65 84 23 85 
Direct :  +32 (0)65 32 85 88 
Fax :     +32 (0)65 84 66 76 
GPG Key :   gslongo-gpg_key.asc 



- Please consider your environmental responsibility before printing this e-mail -

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com