I sent this yesterday, but it didn't seem to get through.

On 12 May 2010, at 16:29, Burnash, James wrote:

> Volgen for a raid 1 solution creates a config file that does the mirroring on 
> the client side - which I would take as an implicit endorsement from the 
> Gluster team (great team, BTW). However, it seems to me that if the bricks 
> replicated between themselves on our 10Gb storage network, it could save a 
> lot of bandwidth for the clients and conceivably save them CPU cycles and I/O
> as well.

Unfortunately not. The shared-nothing architecture is what lets gluster (and 
similarly constructed systems like memcache) scale: each client can work out 
for itself which servers hold a given file, so the per-operation cost stays 
O(1) however many servers you add. Memcache's consistent hash mechanism is a 
thing of beauty.
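
To make that concrete, here is a rough Python sketch of a memcache-style 
consistent hash ring. It is only an illustration of the idea (gluster's 
distribute translator uses its own elastic hashing rather than this exact 
scheme), and the server names, virtual-node count and md5 choice are all made 
up for the example:

import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, servers, vnodes=100):
        # Place each server at many pseudo-random points on a hash ring.
        self.ring = sorted(
            (self._hash("%s#%d" % (s, i)), s)
            for s in servers
            for i in range(vnodes)
        )
        self.hashes = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def lookup(self, key, replicas=2):
        # Walk clockwise from the key's position and return the first
        # `replicas` distinct servers, e.g. 2 for a 2-way AFR-style setup.
        start = bisect.bisect(self.hashes, self._hash(key)) % len(self.ring)
        chosen = []
        for _, server in self.ring[start:] + self.ring[:start]:
            if server not in chosen:
                chosen.append(server)
            if len(chosen) == replicas:
                break
        return chosen

ring = ConsistentHashRing(["server%d" % i for i in range(1000)])
print(ring.lookup("/some/file"))  # the client only ever talks to these 2

Every client that runs the same calculation picks the same 2 servers, which 
is why nobody needs a central lookup service or a full mesh of connections.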

If the clients know where they're supposed to write (say for 2-way AFR), each 
client's worst case is connecting to 2 gluster servers, even if you have a 
thousand servers. If the client knows nothing (and can thus write anywhere), 
every server would have to connect to every other to keep the copies in sync, 
so you'd need around a million connections to handle the same config.
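
Back-of-the-envelope numbers for that, in Python (the 1000 servers and 2-way 
replication simply mirror the example above, nothing gluster-specific):

servers = 1000
replicas = 2

# Client-side placement: each client needs at most `replicas`
# connections per file, however many servers there are.
per_client = replicas

# "Write anywhere": every server must be able to reach every other.
full_mesh = servers * (servers - 1)

print(per_client)  # 2
print(full_mesh)   # 999000, i.e. the "million connections" above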

You can get away with poor scalability for small systems, but that's not what 
gluster is about. Convenience is often inversely proportional to scalability.

You could avoid the end-client complexity and keep replication traffic off 
your client network by using something like a pair of servers acting as an NFS 
gateway in front of gluster. That way your apps connect to a simple NFS share, 
while the gluster back end stays hidden behind the gateways, inside your 10G 
network. I'm not sure whether there are problems with that, but similar setups 
have been mentioned on the list recently.
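
For what it's worth, this is roughly the shape I have in mind on a gateway 
box. The hostnames, paths and network range are purely illustrative, and I 
haven't tested exporting a FUSE mount like this myself:

# Mount the volume with the native gluster client, using the
# volgen-generated client volfile (replication stays on the 10G side).
glusterfs -f /etc/glusterfs/client.vol /mnt/gluster

# /etc/exports on the gateway: re-export that mount to the app network.
# fsid= is needed because a FUSE mount has no stable device id.
/mnt/gluster 192.168.1.0/24(rw,sync,fsid=10,no_subtree_check)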

Caveat: not being a mathematician, I may have this all wrong :)

Marcus
-- 
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK resellers of i...@hand CRM solutions
mar...@synchromedia.co.uk | http://www.synchromedia.co.uk/

