On 7 Mar 2011, at 23:04, Patrick J. LoPresti wrote:

> I need to be able to read 500 megabytes/second, sustained, from a
> single file using a single thread.  I have achieved such speeds and
> higher, reading 200+ gigabytes sequentially, using a fast RAID and XFS
> (i.e., local storage).  I would like to replace that design with
> something more networked and scalable.  But my individual clients
> still require 500+ megabyte/second reads.
> 
> If I tie together a half dozen fast GlusterFS servers with 10GigE,
> will I be able to serve another half dozen 10GigE clients at 500MB/sec
> each?  (Again, assuming single-file, single-thread on each client.
> Also assume each server can read/write its local store at ~1000 MB/sec
> sustained.)

10GigE will give you a net throughput of around 1 GByte/sec, so two clients reading 
at 500 MB/sec over the same partition/interface (and assuming your switch runs at 
wire speed) will saturate it. Maybe you should have a separate 10G interface 
on each server for each client? You might run into trouble with bus bandwidth 
on the servers, though.
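
Rough arithmetic behind that (my own back-of-envelope; the ~6% header overhead is 
a guess, not a measurement):

/* back-of-envelope: usable bandwidth on one 10GigE link */
#include <stdio.h>

int main(void)
{
    double raw_mb_s   = 10.0 * 1000.0 / 8.0;  /* 10 Gbit/s = 1250 MB/s raw       */
    double usable     = raw_mb_s * 0.94;      /* ~6% Ethernet/IP/TCP overhead,
                                                 assumed rather than measured     */
    double per_client = 500.0;                /* each client wants 500 MB/s       */

    printf("usable  : ~%.0f MB/s per link\n", usable);
    printf("clients : ~%.1f at %.0f MB/s before the link saturates\n",
           usable / per_client, per_client);
    return 0;
}

So a single 10G interface tops out at a little over two such clients, before you 
even count gluster or disk overhead.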

From what I understand of gluster, you might get this kind of bandwidth if the 
target files happen to be on different servers, but I'm not sure how you'd 
make sure they were on different servers in order to increase capacity, unless 
you effectively raid-1 your data across all servers. Even then, you'd need 
some way of round-robining that's consistent across all clients.
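
By "consistent" I mean something like every client deterministically picking a 
replica from the same server list, so they spread out without having to talk to 
each other. Purely illustrative sketch (the hash and server names are made up, 
and this is not how gluster actually places or schedules reads):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Toy replica picker: hash the client hostname so each client sticks to one
 * server, and different clients (hopefully) land on different servers.
 * Illustrative only -- not gluster's actual behaviour. */
static unsigned long djb2(const char *s)
{
    unsigned long h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h;
}

int main(void)
{
    const char *servers[] = { "gfs1", "gfs2", "gfs3", "gfs4", "gfs5", "gfs6" };
    const int nservers = sizeof(servers) / sizeof(servers[0]);

    char host[256];
    if (gethostname(host, sizeof(host)) != 0)
        strcpy(host, "client1");   /* placeholder if the hostname lookup fails */

    printf("%s reads its replicas from %s\n",
           host, servers[djb2(host) % nservers]);
    return 0;
}

Something along those lines would at least keep all the clients from hammering the 
same server, assuming every server really does hold a full copy.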

I may be wrong here, mainly thinking out loud!

Marcus
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
