Re: [ceph-users] Seagate Kinetic

2013-10-29 Thread John Spray
The cost of the chassis component [1] is likely to influence totals a fair
bit.  I notice that in their reference design there are only two 10Gb ports
for 60 drives -- this would be the cheap bulk storage option; if you had a
bandwidth-conscious application you'd be looking at more (and more
expensive) 10Gb ports per chassis.

The demands on the top-of-rack network would presumably be high, as all the
replication has to be driven from the client side rather than happening
peer-to-peer between storage servers.  Compared with a Ceph cluster doing
replication on a separate backend network, a Kinetic-based app with N-way
replication would require a factor of N more bandwidth on the (probably
expensive) network between the storage racks and the clients.
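
To put rough numbers on both points, here's a quick back-of-envelope sketch
(Python).  The two 10Gb ports and 60 drives come from the reference design
above; the per-drive streaming rate and the replication factor are my own
assumptions, so treat the output as illustrative only:

# Back-of-envelope: chassis uplink per drive, and ToR bandwidth cost of
# client-driven N-way replication vs. backend (Ceph-style) replication.

CHASSIS_UPLINK_GBPS = 2 * 10   # two 10Gb ports per reference chassis
DRIVES_PER_CHASSIS  = 60
DRIVE_STREAM_MBS    = 150      # assumed sequential MB/s per HDD
REPLICATION_FACTOR  = 3        # assumed N-way replication

# Share of the chassis uplink available to each drive, ignoring overheads.
per_drive_mbs = (CHASSIS_UPLINK_GBPS * 1000 / 8) / DRIVES_PER_CHASSIS
print(f"Uplink share per drive: {per_drive_mbs:.0f} MB/s "
      f"(vs ~{DRIVE_STREAM_MBS} MB/s the drive itself could stream)")

# If clients write 1 GB/s of user data, client-driven replication puts
# N copies on the client-facing (ToR) network; backend replication puts one.
client_write_gbs = 1.0
print(f"ToR traffic, client-driven replication: "
      f"{client_write_gbs * REPLICATION_FACTOR:.1f} GB/s")
print(f"ToR traffic, backend replication:       {client_write_gbs:.1f} GB/s")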

I'll certainly be following with interest, but I'm very sceptical about the
cost benefits until I see an overall system including the application-level
redundancy, the chassis and the networking.  The drive cost might vanish in
the noise once we see how heavily an application would hit the ToR network
on a system like this (e.g. when recovering from a drive failure, clients
are going to have to eat ToR bandwidth to do that recovery too).  Could be
a lucky break for switch vendors :-)

John

1. https://developers.seagate.com/display/KV/Kinetic+Deployment+Chassis

On Tue, Oct 29, 2013 at 11:23 AM,  wrote:

>
> That's unfortunate; hopefully 2nd-gens will improve and open things up.
>
> Some numbers:
>
> - Commercial grid-style SAN is maybe £1.70 per usable GB
> - Ceph cluster of about 1PB built on Dell hardware is maybe £1.25 per
> usable GB
> - Bare drives like WD RE4 3TB are about £0.21/GB (assuming 1/3rd capacity
> ends up usable)
>
> So even if Ethernet hybrid drives cost 2x or 3x the price of standard
> drives, the cluster cost could still be halved :)
>
> It'd be interesting to know what £/GB (or $/GB) others have achieved with
> their Ceph implementations.
>
>
>
> On 2013-10-28 15:50, Gregory Farnum wrote:
>
>> On Monday, October 28, 2013, wrote:
>>
>> Kinetic is interesting, but I think it's going to find more uptake
>> among big Open Compute users like Facebook than in general distributed
>> storage systems. In particular, these drives don't appear to have the
>> CPU power required to run OSDs, and their native interfaces don't have
>> the strength to be useful underneath.
>> -Greg
>>
>> --
>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>
>


Re: [ceph-users] Seagate Kinetic

2013-10-29 Thread Blair Bethwaite
Hi James,

> Message: 2
> Date: Tue, 29 Oct 2013 11:23:14 +
> From: ja...@peacon.co.uk
> To: Gregory Farnum 
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Seagate Kinetic
> Message-ID: <81dbc7ae324ac5bc6afd85aef080f...@peacon.co.uk>
> Content-Type: text/plain; charset=UTF-8; format=flowed

> - Commercial grid-style SAN is maybe £1.70 per usable GB

I wouldn't know, but for that money I'd be expecting very high-performance
and all the service trimmings.

> - Ceph cluster of about 1PB built on Dell hardware is maybe £1.25 per
> usable GB

That sounds a bit steep. I'm working on Dell Ceph configs at the moment,
and whilst I know we're getting decent discounts, they aren't out of the
ordinary for medium-large volumes - we're not even buying a PB up-front.

For a straight disk setup (4TB NL-SAS) on an Intel Ivy Bridge platform
including a PERC H710P, we're looking at roughly AU$0.30 per usable GB.

For a 1:2 setup (200GB DCS3700 SSD : 4TB NL-SAS) it's under AU$0.60.

And erasure coding is coming (so usable capacity goes up)!
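
By way of illustration, the arithmetic behind a per-usable-GB figure is
roughly the following (Python; every price below is a placeholder rather
than an actual quote -- substitute your own):

# Rough $/usable-GB arithmetic for a replicated Ceph node.
# All prices are placeholders -- plug in your own quotes.

server_cost        = 6000.0   # chassis + CPU + RAM + controller + NICs
hdd_count          = 12
hdd_cost           = 250.0    # per 4TB NL-SAS drive
hdd_capacity_tb    = 4.0
ssd_count          = 0        # e.g. 6 for a 1:2 SSD-journal setup
ssd_cost           = 400.0    # per 200GB journal SSD
replication_factor = 3        # erasure coding would reduce this overhead

node_cost = server_cost + hdd_count * hdd_cost + ssd_count * ssd_cost
usable_gb = (hdd_count * hdd_capacity_tb * 1000) / replication_factor
print(f"Cost per usable GB: ${node_cost / usable_gb:.2f}")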

--
Cheers,
~Blairo


Re: [ceph-users] Seagate Kinetic

2013-10-29 Thread james


That's unfortunate; hopefully 2nd-gens will improve and open things up.

Some numbers:

- Commercial grid-style SAN is maybe £1.70 per usable GB
- Ceph cluster of about 1PB built on Dell hardware is maybe £1.25 per 
usable GB
- Bare drives like WD RE4 3TB are about £0.21/GB (assuming 1/3rd 
capacity ends up usable)


So even if Ethernet hybrid drives cost 2x or 3x the price of standard
drives, the cluster cost could still be halved :)
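
Rough arithmetic to show what I mean (Python) -- all of it illustrative,
just extrapolating from the figures above:

# Illustrative only, using the figures above: bare drives at GBP 0.21 per
# usable GB vs a full Ceph cluster at GBP 1.25 per usable GB.
bare_drive_gbp_per_usable_gb = 0.21
cluster_gbp_per_usable_gb    = 1.25

for multiplier in (2, 3):
    # Suppose Ethernet drives cost this multiple of ordinary drives but
    # replace most of the storage-server layer (chassis/network excluded).
    hypothetical = bare_drive_gbp_per_usable_gb * multiplier
    saving = 1 - hypothetical / cluster_gbp_per_usable_gb
    print(f"{multiplier}x drive price: ~GBP {hypothetical:.2f}/usable GB, "
          f"{saving:.0%} below the current cluster cost")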


It'd be interesting to know what £/GB (or $/GB) others have achieved 
with their Ceph implementations.



On 2013-10-28 15:50, Gregory Farnum wrote:

On Monday, October 28, 2013, wrote:

Kinetic is interesting, but I think it's going to find more uptake
among big Open Compute users like Facebook than in general distributed
storage systems. In particular, these drives don't appear to have the
CPU power required to run OSDs, and their native interfaces don't have
the strength to be useful underneath.
-Greg

--
Software Engineer #42 @ http://inktank.com | http://ceph.com




Re: [ceph-users] Seagate Kinetic

2013-10-28 Thread Gregory Farnum
On Monday, October 28, 2013, wrote:

> Not brand-new, but I've not seen it mentioned on here so far.  Seagate
> Kinetic essentially enables HDDs to present themselves directly over
> Ethernet as Swift object storage:
>
> http://www.seagate.com/solutions/cloud/data-center-cloud/platforms/?cmpid=friendly-_-pr-kinetic-us
>
> If the CPUs on these drives have enough oomph for Swift, what about Ceph
> OSDs?
>
> Add in some DHCP option based auto-configure mechanism and a small SLC SSD
> in each drive (like hybrid drives; Kinetic graphics hint at this already:
> http://www.seagate.com/www-content/ti-dm/_shared/images/figure-3-drive-application-management-storage-software-api-732x642.png)
> so we could also eliminate the storage server layer, get smaller failure
> domains, and solve the journalling problem - and ultimately reduce cost and
> complexity.  Basically build a rack with hundreds of hot-plug Ethernet HDD
> ports...
>
> Forgive me, I'm just thinking out-loud...
>

Kinetic is interesting, but I think it's going to find more uptake among
big Open Compute users like Facebook than in general distributed storage
systems. In particular, these drives don't appear to have the CPU
power required to run OSDs, and their native interfaces don't have the
strength to be useful underneath.
-Greg


-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com


Re: [ceph-users] Seagate Kinetic

2013-10-28 Thread Patrick McGarry
Well, as I understand it, Seagate has their own home-rolled thing.  I
believe there was some discussion at one point about using Ceph together
with their offering, but if I remember correctly Seagate wanted to remove
RADOS and just use the Ceph clients, which didn't make a lot of sense to
us.



Best Regards,

Patrick McGarry
Director, Community || Inktank
http://ceph.com  ||  http://inktank.com
@scuttlemonkey || @ceph || @inktank


On Mon, Oct 28, 2013 at 1:02 PM, Hunter Nield  wrote:
> I've been wondering about the same thing.
>
> Has anyone had a chance to look at the Simulator?
> https://github.com/Seagate/Kinetic-Preview
>
>
> On Mon, Oct 28, 2013 at 5:56 PM,  wrote:
>>
>> Not brand-new, but I've not seen it mentioned on here so far.  Seagate
>> Kinetic essentially enables HDDs to present themselves directly over
>> Ethernet as Swift object storage:
>>
>>
>> http://www.seagate.com/solutions/cloud/data-center-cloud/platforms/?cmpid=friendly-_-pr-kinetic-us
>>
>> If the CPUs on these drives have enough oomph for Swift, what about Ceph
>> OSDs?
>>
>> Add in some DHCP option based auto-configure mechanism and a small SLC SSD
>> in each drive (like hybrid drives; Kinetic graphics hint at this already:
>> http://www.seagate.com/www-content/ti-dm/_shared/images/figure-3-drive-application-management-storage-software-api-732x642.png)
>> so we could also eliminate the storage server layer, get smaller failure
>> domains, and solve the journalling problem - and ultimately reduce cost and
>> complexity.  Basically build a rack with hundreds of hot-plug Ethernet HDD
>> ports...
>>
>> Forgive me, I'm just thinking out-loud...


Re: [ceph-users] Seagate Kinetic

2013-10-28 Thread Hunter Nield
I've been wondering about the same thing.

Has anyone had a chance to look at the Simulator?
https://github.com/Seagate/Kinetic-Preview
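
Conceptually the drive is just a flat key/value store addressed directly
over TCP.  To make the model concrete, here's a toy in-memory stand-in in
Python -- this is NOT the real Kinetic client API (the simulator speaks a
protobuf-over-TCP protocol with versioned entries), just the shape of the
interface:

# Toy stand-in for an "object drive": a flat key/value store reachable at
# a network address.  Purely illustrative; not the Kinetic protocol/client.

class FakeObjectDrive:
    def __init__(self, address):
        self.address = address        # e.g. "10.0.0.1:8123" (hypothetical)
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def delete(self, key):
        self._store.pop(key, None)

# A client replicating an object to three drives itself -- which is exactly
# where the extra client-side/ToR bandwidth discussed up-thread comes from.
drives = [FakeObjectDrive(f"10.0.0.{i}:8123") for i in (1, 2, 3)]
for d in drives:
    d.put(b"object-0001", b"...payload bytes...")
print(drives[0].get(b"object-0001"))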


On Mon, Oct 28, 2013 at 5:56 PM,  wrote:

> Not brand-new, but I've not seen it mentioned on here so far.  Seagate
> Kinetic essentially enables HDDs to present themselves directly over
> Ethernet as Swift object storage:
>
> http://www.seagate.com/solutions/cloud/data-center-cloud/platforms/?cmpid=friendly-_-pr-kinetic-us
>
> If the CPUs on these drives have enough oomph for Swift, what about Ceph
> OSDs?
>
> Add in some DHCP option based auto-configure mechanism and a small SLC SSD
> in each drive (like hybrid drives; Kinetic graphics hint at this already:
> http://www.seagate.com/www-content/ti-dm/_shared/images/figure-3-drive-application-management-storage-software-api-732x642.png)
> so we could also eliminate the storage server layer, get smaller failure
> domains, and solve the journalling problem - and ultimately reduce cost and
> complexity.  Basically build a rack with hundreds of hot-plug Ethernet HDD
> ports...
>
> Forgive me, I'm just thinking out-loud...