Hi,

@Joop:
thanks for your info! :)
To be honest, I planned the same here, as iptables would work, but it also brings 
some extra CPU load and network latency with it.

@all:
I read in the documentation that the Gluster server pool automatically tells the 
GlusterFS client which storage servers to use.
So even if I only point the client at one server, it will know the complete 
pool with all peers, right?
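For example (the hostname and volume name are just placeholders), I mean something like:

  mount -t glusterfs gluster01:/myvol /mnt/gluster

and the client would then learn about all bricks from the volume file it fetches from 
gluster01, if I understand it correctly.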

Is there any layer in the Gluster stack that balances client requests across 
all GlusterFS servers, or does it make sense to use DNS round-robin (RR) here?
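If DNS RR makes sense, I assume it would just be several A records for one service name, 
roughly like this in a BIND-style zone (name and IPs are placeholders):

  storage  IN  A  10.0.0.1
  storage  IN  A  10.0.0.2
  storage  IN  A  10.0.0.3

so that mount/volfile requests get spread across the servers.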
Thanks in advance for your info!

Best,
Sven.


Sven Knohsalla | System Administration | Netbiscuits

Office +49 631 68036 433 | Fax +49 631 68036 111 | E-Mail 
s.knohsa...@netbiscuits.com | Skype: netbiscuits.admin
Netbiscuits GmbH | Europaallee 10 | 67657 | GERMANY

From: Joop [mailto:jvdw...@xs4all.nl]
Sent: Sunday, June 9, 2013 1:33 PM
To: Sven Knohsalla
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Separate replication/self-healing and access 
traffic

Hi Sven,

I just read the official documentation and googled for how to separate the storage 
replication/self-healing network from the storage-access network (NFS or the 
native GlusterFS client), but I couldn't find a clear answer.

Is it possible to separate the replication and access networks with the options


option transport.socket.bind-address

option auth.addr.brick.allow

?
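
To make it concrete, I mean roughly something along these lines in the server volfile, 
or the auth.allow equivalent via the CLI; the IP, brick name and volume name are just 
placeholders:

  option transport.socket.bind-address 10.1.1.1
  option auth.addr.<brick>.allow 10.1.1.*

  gluster volume set <volname> auth.allow 10.1.1.*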

If so, is it possible to run the native GlusterFS client, or do I have to use NFS for 
storage access and the first option to allow server-side replication?

Are there any issues I might run into, or anything I haven't paid attention to at this point?


I have implemented a kind of split-DNS solution whereby the management layer 
resolves to a different network than the storage interface. I'm using oVirt 
with Gluster in this way and things work rather smoothly. Jeff's solution 
works too, but as he says in his blog, using iptables is less transparent; 
split DNS has the advantage that it either works or it doesn't if you forget to add 
a node to the storage DNS zone.
The oVirt engine sees stor_srv01 as 192.168.1.1, but the storage layer sees it as 
10.1.1.1, for example.
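In hosts-file terms the idea is simply (simplified; in practice these are two DNS 
zones/views):

  # as resolved on the management network (oVirt engine side)
  192.168.1.1   stor_srv01

  # as resolved on the storage network (gluster/hypervisor side)
  10.1.1.1      stor_srv01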

Still got questions? Go ahead.

Regards,

Joop
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
