Re: [Gluster-users] XenServer and Glusterfs 3.1

2010-11-10 Thread Shehjar Tikoo
Yes. That was a limitation in the 3.1 release and it is already fixed in 
mainline. The fix lets you change the port number that Gluster NFS 
listens on by default. It will ship in 3.1.1, but if you'd like to test 
right away, please use 3.1.1qa5 by checking it out from the 
repository:


$ git clone git://git.gluster.com/glusterfs.git
$ cd glusterfs
$ git checkout -b v3.1.1qa5 3.1.1qa5

Then build and install.
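
If you are building from a fresh checkout, the usual autotools sequence 
should apply (a sketch; paths and configure options will depend on your 
distribution):

$ ./autogen.sh
$ ./configure
$ make
$ make install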

To change the NFS port, locate the volume of type nfs/server in 
/etc/glusterd/nfs/nfs-server.vol and add the following line:


option nfs.port 2049

Note that this option is not yet available in the gluster CLI, so you'll 
have to edit the file manually and restart the Gluster NFS daemon. Be 
careful with the CLI afterwards: if you restart a volume through it, 
your edited volume file will be overwritten with the default version.
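
For illustration, the edited section would end up looking something like 
this (the other options and the subvolume name are placeholders and will 
differ per setup):

volume nfs-server
  type nfs/server
  option nfs.dynamic-volumes on
  option nfs.port 2049
  subvolumes your-volume
end-volume

One way to restart the NFS daemon is to kill its glusterfs process and 
restart glusterd, which should respawn it with the new volfile.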



Thanks
-Shehjar

Stefano Baronio wrote:

Hello all,
  I'm new to the list and have been working with glusterfs for about a month now.
I'm asking for help getting XenServer to work with Glusterfs.

I have a standard setup of both XenServer and Glusterfs.
I can mount the glusterfs NFS share from the Xen CLI, write to it, and mount
it as an ISO library as well.
I just can't mount it for storage purposes.
It seems that XenServer probes the NFS share directly on port 2049
instead of asking the portmapper.

I have tried to make glusterfs listen on port 2049 without success, so I
have set up port forwarding on the gluster server.
Let's say:
xen01 - 192.168.14.33
xenfs01 (gluster nfs) - 192.168.14.61

The iptables settings are:
iptables -t nat -A PREROUTING -d 192.168.14.61 -p tcp -m tcp --dport 2049 -j DNAT
--to-destination 192.168.14.61:38467
iptables -A FORWARD -d 192.168.14.61 -p tcp -m tcp --dport 38467 -j ACCEPT

Now XenServer can correctly test the gluster nfs share. It creates the
sr-uuid directory in it, but it can't mount it, with the following error:
FAILED: (errno 32) stdout: '', stderr: 'mount:
xenfs01:/xenfs/1ca32487-42fe-376e-194c-17f78afc006c failed, reason given by
server: No such file or directory

Any help appreciated.
Thank you

Stefano





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] ACL with GlusterFS 3.1?

2010-11-10 Thread Shehjar Tikoo

Hi

ACLs are not yet supported.

Thanks

Mike Hanby wrote:

Howdy,

Are access control lists (ACL, i.e. setfacl / getfacl) supported in
GlusterFS 3.1?

If yes, beyond mounting the bricks with "defaults,acl", what do I need
to do to enable ACLs for both NFS and native Gluster clients?

Google isn't returning anything useful on this topic.

Thanks,

Mike

=
Mike Hanby
mha...@uab.edu
UAB School of Engineering
Information Systems Specialist II
IT HPCS / Research Computing



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Creation of volume has been unsuccessful

2010-11-10 Thread Pranith Kumar. Karampuri
hi itzik,
   Could you attach the zipped glusterd logs from /etc/glusterd/logs 
on all the machines? That will help us figure out what the problem is.
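
For example, something along these lines on each node (the archive name 
is just a suggestion):

$ tar czf glusterd-logs-$(hostname).tar.gz /etc/glusterd/logs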

Thanks
Pranith
- Original Message -
From: "itzik bar" 
To: gluster-users@gluster.org
Sent: Wednesday, November 10, 2010 9:50:31 PM
Subject: [Gluster-users] Creation of volume  has been unsuccessful

Hi,
When running the following command on host named gluster1:
#gluster volume create test-volume gluster3:/data1

I get:
Creation of volume test-volume has been unsuccessful

I tried to look for clues in the logs but didn't find any.

I have 4 nodes: gluster1,gluster2,gluster3,gluster4


/etc/glusterfs/glusterd.vol 

volume management
type mgmt/glusterd
option working-directory /etc/glusterd
option transport-type tcp   
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
end-volume

gluster peer status (from gluster1)
Number of Peers: 3

Hostname: gluster4
Uuid: 7ca19338-5f45-448a-a324-648e990a35de
State: Peer in Cluster (Connected)

Hostname: gluster3
Uuid: 1679a2a2-d3fd-4b9b-aa61-93a94287b565
State: Peer in Cluster (Connected)

Hostname: gluster2
Uuid: 707c894e-6c0d-471e-afc5-09ea0dbf2bbc
State: Peer in Cluster (Connected)

What am I missing?

Thanks for your help,
Dan


  
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Questions about expanding a volume

2010-11-10 Thread John Lao
Hi,

I am currently running glusterfs 3.1 with 3 bricks in distribute mode and I am 
thinking of adding a 4th brick.  How does gluster treat a new brick when it is 
added to an existing volume?  If I do not rebalance the volume, will it send 
all/most new data to the new brick, or will it still distribute data evenly?

Also, what's the performance impact on the volume when running a rebalance?  We 
have about 5.5TB of data, and most files are smaller than 1 MB.
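
(For reference, the operations in question in the 3.1 CLI, with 
placeholder volume and brick names:

$ gluster volume add-brick test-volume server4:/exp4
$ gluster volume rebalance test-volume start
$ gluster volume rebalance test-volume status)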

Thanks,

John Lao
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] XenServer and Glusterfs 3.1

2010-11-10 Thread Stefano Baronio
Hello all,
  I'm new to the list and have been working with glusterfs for about a month now.
I'm asking for help getting XenServer to work with Glusterfs.

I have a standard setup of both XenServer and Glusterfs.
I can mount the glusterfs NFS share from the Xen CLI, write to it, and mount
it as an ISO library as well.
I just can't mount it for storage purposes.
It seems that XenServer probes the NFS share directly on port 2049
instead of asking the portmapper.

I have tried to make glusterfs listen on port 2049 without success, so I
have set up port forwarding on the gluster server.
Let's say:
xen01 - 192.168.14.33
xenfs01 (gluster nfs) - 192.168.14.61

The iptables settings are:
iptables -t nat -A PREROUTING -d 192.168.14.61 -p tcp -m tcp --dport 2049 -j DNAT
--to-destination 192.168.14.61:38467
iptables -A FORWARD -d 192.168.14.61 -p tcp -m tcp --dport 38467 -j ACCEPT
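
(A quick way to sanity-check the redirect, assuming the nat table and 
the portmapper are in use; output will vary:

$ iptables -t nat -L PREROUTING -n -v    # packet counters show whether the DNAT rule is hit
$ rpcinfo -p 192.168.14.61               # Gluster NFS should be registered here)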

Now XenServer can correctly test the gluster nfs share. It creates the
sr-uuid directory in it, but it can't mount it, with the following error:
FAILED: (errno 32) stdout: '', stderr: 'mount:
xenfs01:/xenfs/1ca32487-42fe-376e-194c-17f78afc006c failed, reason given by
server: No such file or directory
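
(For anyone reproducing this: Gluster NFS in 3.1 speaks only NFSv3 over 
TCP, so a manual test mount from another host would look something like 
this - the mount point is illustrative, and nolock is used because NLM 
is not provided:

$ mount -t nfs -o vers=3,tcp,nolock xenfs01:/xenfs /mnt/test)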

Any help appreciated.
Thank you

Stefano
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] ACL with GlusterFS 3.1?

2010-11-10 Thread Mike Hanby
Howdy,

Are access control lists (ACL, i.e. setfacl / getfacl) supported in GlusterFS 
3.1?

If yes, beyond mounting the bricks with "defaults,acl", what do I need to do to 
enable ACLs for both NFS and native Gluster clients?
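
(For context, the smoke test I have in mind, with placeholder paths and 
users - per the reply elsewhere in this digest this does not yet work on 
3.1, and the 'acl' mount option for the native client is an assumption:

$ mount -t glusterfs -o acl server1:/test-volume /mnt/gluster
$ setfacl -m u:alice:rw /mnt/gluster/somefile
$ getfacl /mnt/gluster/somefile)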

Google isn't returning anything useful on this topic.

Thanks,

Mike

=
Mike Hanby
mha...@uab.edu
UAB School of Engineering
Information Systems Specialist II
IT HPCS / Research Computing


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.1: Multiple networks and Client access

2010-11-10 Thread Udo Waechter
Hi and thanks for the answer.

I do not really know why it does not work. One problem seems to be how the 
server from which the other peers are probed gets recorded in the peer DB. Let me quote:

> > - when I add a peer with its internal name it works, except for the peer 
> > from which I add the other peers.
> > ---hostname2.internal: $ gluster peer probe hostname1.internal
> > ---hostname1.internal $ gluster peer status
> > Number of Peers: 1
> > 
> > Hostname: 10.10.33.142
> > Uuid: a19fc9d3-d00f-4440-b096-c974db1cd8c7
> > State: Peer in Cluster (Connected)
> > 
> > This should be hostname2.internal
> > 
> > When I do gluster peer probe hostname1.internal (on the host 
> > hostname1.internal) I get:
> > 
> > "hostname1.internal is already part of another cluster"
> > so here, ip/name resolution works...
> > 
> > this works in all permutations. The peer from which I run "gluster peer 
> > probe ..." is never resolved to its internal name, but to its IP address
> > 
> > As a result of all this, point a) cannot succeed, since:
> > 
> > gluster volume create  hostname... hostname... hostname... results in:
> > 
> > "Host hostnameX is not a friend", where hostnameX is the host where the 
> > volume creation was attempted.
> > 

If gluster used the hostname for all peers, then I guess there would be no 
problem at all.
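
For reference, the split-horizon /etc/hosts workaround suggested earlier 
would look roughly like this (the addresses are illustrative):

# /etc/hosts on the storage nodes (internal network)
10.10.1.1    hostname1.internal
10.10.1.2    hostname2.internal

# /etc/hosts on the external clients
192.168.1.1  hostname1.internal
192.168.1.2  hostname2.internal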

Do you have a bug number so I could track the state myself?

Thanks,
udo.

-- 
---[ Institute of Cognitive Science @ University of Osnabrueck
---[ Albrechtstrasse 28, D-49076 Osnabrueck, 969-3362
---[ Documentation: https://doc.ikw.uni-osnabrueck.de



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] filling gluster cluster with large file doesn't crash the system?!

2010-11-10 Thread Matt Hodson

Craig,
inline...

On Nov 10, 2010, at 7:17 AM, Craig Carl wrote:


Matt -
   A couple of questions -

What is your volume config? (`gluster volume info all`)


gluster> volume info all

Volume Name: gs-test
Type: Distribute
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 172.16.1.76:/exp1
Brick2: 172.16.2.117:/exp2


What is the hardware config for each storage server?


brick 1 = 141GB
brick 2 = 143GB


What command did you run to create the test data?


#perl -e 'print rand while 1' > y.out &


What process is still writing to the file?


same one as above.



Thanks,
Craig

-->
Craig Carl
Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Gtalk - craig.c...@gmail.com


From: "Matt Hodson" 
To: gluster-users@gluster.org
Cc: "Jeff Kozlowski" 
Sent: Tuesday, November 9, 2010 10:46:04 AM
Subject: Re: [Gluster-users] filling gluster cluster with large file  
doesn'tcrash the system?!


I should also note that on this non-production test rig the block size
on both bricks is 1 KB (1024 bytes), so the theoretical per-file size limit is
16GB. So how, then, did I get a file of 200GB?
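
One thing worth checking is whether the file is sparse, since apparent 
size and allocated space can differ wildly (y.out is the test file from 
earlier in this thread):

$ ls -lh y.out     # apparent size as reported to the client
$ du -h y.out      # space actually allocated on disk
$ stat -c '%s bytes, %b blocks' y.out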
-matt

On Nov 9, 2010, at 10:34 AM, Matt Hodson wrote:

> craig et al,
>
> I have a 2-brick distributed 283GB gluster cluster on CentOS 5. We
> NFS-mounted the cluster from a 3rd machine and wrote random junk to
> a file. I watched the file grow to 200GB on the cluster, where it
> appeared to stop. However, the machine writing to the file still
> lists the file as growing. It's now at over 320GB. What's going on?
>
> -matt
>
> ---
> Matt Hodson
> Scientific Customer Support, Geospiza
> (206) 633-4403, Ext. 111
> http://www.geospiza.com
>
>
>
>


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




[Gluster-users] Question about redundancy over ethernet

2010-11-10 Thread ron gage
Greetings: 

I am considering deploying Gluster with NFS as the primary access protocol. 

In order to achieve failover redundancy, does Gluster use a virtual IP 
address the way a load balancer would? Let's say I have a 4-node Gluster cluster 
set up and my client connects to node A. If node A goes off the network for whatever 
reason, what happens to my client? Are sessions maintained, so that clients 
don't register a server disconnect? In other words, with NFS (or even CIFS), 
how transparent is the failover process? Finally, what is the typical failover 
cutover timing like? Is it sub-second? 

Ron 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Creation of volume has been unsuccessful

2010-11-10 Thread itzik bar
Hi,
When running the following command on host named gluster1:
#gluster volume create test-volume gluster3:/data1

I get:
Creation of volume test-volume has been unsuccessful

I tried to look for clues in the logs but didn't find any.

I have 4 nodes: gluster1,gluster2,gluster3,gluster4


/etc/glusterfs/glusterd.vol 

volume management
type mgmt/glusterd
option working-directory /etc/glusterd
option transport-type tcp   
option transport.socket.keepalive-time 10
option transport.socket.keepalive-interval 2
end-volume

gluster peer status (from gluster1)
Number of Peers: 3

Hostname: gluster4
Uuid: 7ca19338-5f45-448a-a324-648e990a35de
State: Peer in Cluster (Connected)

Hostname: gluster3
Uuid: 1679a2a2-d3fd-4b9b-aa61-93a94287b565
State: Peer in Cluster (Connected)

Hostname: gluster2
Uuid: 707c894e-6c0d-471e-afc5-09ea0dbf2bbc
State: Peer in Cluster (Connected)

What am I missing?

Thanks for your help,
Dan


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] filling gluster cluster with large file doesn't crash the system?!

2010-11-10 Thread Craig Carl
Matt - 
A couple of questions - 

What is your volume config? (`gluster volume info all`) 
What is the hardware config for each storage server? 
What command did you run to create the test data? 
What process is still writing to the file? 




Thanks, 
Craig 

--> 
Craig Carl 



Gluster, Inc. 
Cell - (408) 829-9953 (California, USA) 
Gtalk - craig.c...@gmail.com 


From: "Matt Hodson"  
To: gluster-users@gluster.org 
Cc: "Jeff Kozlowski"  
Sent: Tuesday, November 9, 2010 10:46:04 AM 
Subject: Re: [Gluster-users] filling gluster cluster with large file doesn't 
crash the system?! 

I should also note that on this non-production test rig the block size 
on both bricks is 1 KB (1024 bytes), so the theoretical per-file size limit is 
16GB. So how, then, did I get a file of 200GB? 
-matt 

On Nov 9, 2010, at 10:34 AM, Matt Hodson wrote: 

> craig et al, 
> 
> I have a 2-brick distributed 283GB gluster cluster on CentOS 5. We 
> NFS-mounted the cluster from a 3rd machine and wrote random junk to 
> a file. I watched the file grow to 200GB on the cluster, where it 
> appeared to stop. However, the machine writing to the file still 
> lists the file as growing. It's now at over 320GB. What's going on? 
> 
> -matt 
> 
> --- 
> Matt Hodson 
> Scientific Customer Support, Geospiza 
> (206) 633-4403, Ext. 111 
> http://www.geospiza.com 
> 
> 
> 
> 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.1: Multiple networks and Client access

2010-11-10 Thread Craig Carl
Udo - 
We are tracking this sort of client-side configuration issue as bug #2014; 
in this particular case, however, the hosts-file/DNS workaround appears to be 
working well at several sites. Is there any reason that process won't work for 
you? 




Thanks, 
Craig 

--> 
Craig Carl 



Gluster, Inc. 
Cell - (408) 829-9953 (California, USA) 
Gtalk - craig.c...@gmail.com 


From: "Udo Waechter"  
To: gluster-users@gluster.org 
Cc: "Craig Carl"  
Sent: Tuesday, November 9, 2010 11:33:56 PM 
Subject: Re: [Gluster-users] 3.1: Multiple networks and Client access 

Hi, any news on this? 
Thanks for the effort, 
udo. 

On 01.11.2010, at 15:02, Udo Waechter wrote: 

> Hi and thanks for the answer. 
> I think there is a bug/problem with gluster. 
> First some pre-knowledge 
> 
> - our internal network does not resolve via DNS. 
> - the internal hosts are only resolvable via /etc/hosts. 
> 
> Now, my idea would be this: 
> 
> 1. on the cluster nodes, use the internal (bonding) interfaces for 
> communication 
> 2. the rest of the network should use the external interfaces to communicate 
> with the storage cloud. 
> 
> To achieve this, the idea was now to: 
> 
> a) create the volume with the internal names 
> $ gluster volume create ... hostname1.internal:/exp hostname2.internal:/exp 
> hostname3.internal:/exp 
> 
> b) On all the (external) nodes that should reach the gluster cluster, add 
> hostname1...X.internal to /etc/hosts, resolving to their external IP addresses. 
> 
> 
> Now to all the problems: 
> 
> Regarding point 1 above: 
> - when I add a peer with its internal name it works, except for the peer 
> from which I add the other peers. 
> ---hostname2.internal: $ gluster peer probe hostname1.internal 
> ---hostname1.internal $ gluster peer status 
> Number of Peers: 1 
> 
> Hostname: 10.10.33.142 
> Uuid: a19fc9d3-d00f-4440-b096-c974db1cd8c7 
> State: Peer in Cluster (Connected) 
> 
> This should be hostname2.internal 
> 
> When I do gluster peer probe hostname1.internal (on the host 
> hostname1.internal) I get: 
> 
> "hostname1.internal is already part of another cluster" 
> so here, ip/name resolution works... 
> 
> this works in all permutations. The peer from which I run "gluster peer probe 
> ..." is never resolved to its internal name, but to its IP address 
> 
> As a result of all this, point a) cannot succeed, since: 
> 
> gluster volume create  hostname... hostname... hostname... results in: 
> 
> "Host hostnameX is not a friend", where hostnameX is the host where the 
> volume creation was attempted. 
> 
> 
> I have tried installing pdnsd for the internal network, but this does not 
> solve the problem either. 
> 
> As a last resort, I edited /etc/glusterd/peers/* and replaced the IP addresses 
> by hand. Now "gluster peer status" gives me the names instead of the 
> IP addresses, 
> but "volume create" still tells me that the host (where I create the volume 
> from) is not a friend. 
> 
> Any help or solution is highly appreciated. 
> Thanks, 
> udo. 
> 
> On 18.10.2010, at 07:10, Craig Carl wrote: 
> 
>> Udo - 
>> With 3.1, when you mount/create/change a volume, those changes are propagated 
>> via RPC to all of the other Gluster servers and clients. When you created 
>> the volume using 10.10.x.x IP addresses, those IPs were what got sent to the 
>> client. In previous versions you could have just edited the client-side 
>> configuration file and changed or added the 192.x addresses, but not in this 
>> version, due to DVM. There should be a way to make multiple networks work, so 
>> I will file a bug. 
>> In the meantime I think I have a workaround. If you use names instead of IP 
>> addresses and then make sure DNS or the hosts files are set up properly, it 
>> should work, as Gluster exports via all interfaces. For example, if the servers 
>> have these IPs - 
>> have these IPs - 
>> 
>> server1 - 10.10.1.1, 192.168.1.1 
>> server2 - 10.10.1.2, 192.168.1.2 
>> server3 - 10.10.1.3, 192.168.1.3 
>> 
>> #gluster volume create test-ext stripe 3 server1:/ext server2:/ext 
>> server3:/ext 
>> 
>> You would just need to make sure that hosts on the 10.10.x.x network resolve 
>> the server name to its 10.x IP, and clients on the 192.x network resolve it to 
>> the 192.x IP. It should be a simple change to the /etc/hosts files. 
>> 
>> Please let me know if this works so I can include that information in my bug 
>> report. 
>> 
>> Thanks, 
>> 
>> Craig 
>> 
>> -- 
>> Craig Carl 
>> Senior Systems Engineer; Gluster, Inc. 
>> Cell - (408) 829-9953 (California, USA) 
>> Office - (408) 770-1884 
>> Gtalk - craig.c...@gmail.com 
>> Twitter - @gluster 
>> Installing Gluster Storage Platform, the movie! 
>> http://rackerhacker.com/2010/08/11/one-month-with-glusterfs-in-production/ 
>> 
>> 
>> From: "Udo Waechter"  
>> To: gluster-users@gluster.org 
>> Sent: Sunday, October 17, 2010 12:57:18 AM 
>> Subject: Re: [Gluster-users] 3.1: Multiple networks and Client access 
>> 
>> Hi, 
>> although I totally forgot about the firewall, this problem is not related t