[Gluster-users] eager locking

2012-09-13 Thread anthony garnier

Hi everyone,

I have seen an eager locking option (cluster.eager-lock) in the gluster CLI 
(GlusterFS 3.3), but there is no reference to this option in the 
documentation. 
Is it related to this post?
http://hekafs.org/index.php/2012/03/glusterfs-algorithms-replication-future/

Enabling this option seems to boost write performance, but is it safe to use?
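
For context, I toggled it simply through the volume set interface; the volume 
name below is illustrative, and the current value shows up under "Options 
Reconfigured" in gluster volume info:

# gluster volume set test cluster.eager-lock on
# gluster volume set test cluster.eager-lock off
# gluster volume info test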

Thx

 Anthony Garnier
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Geo rep fail

2012-08-02 Thread anthony garnier

Hi Vijay,

Thx for your help, I tried with root and it worked !

Many thanks,

Anthony

 Date: Thu, 2 Aug 2012 02:28:27 -0400
 From: vkop...@redhat.com
 To: sokar6...@hotmail.com
 CC: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Geo rep fail
 
 Hi anthony, 
 
 From your invocation of the geo-rep session, I understand that you are trying
 to start geo-rep with the slave as a normal user. 
 To successfully start a geo-rep session, the slave needs to run as the super user.
 Otherwise, if you really want the slave to be a normal user, you should 
 set up geo-rep through the mount broker; the details are available here:
 
 http://docs.redhat.com/docs/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/chap-User_Guide-Geo_Rep-Preparation-Settingup_Slave.html
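 
 For example, with the volume and slave path from your command, starting the
 session with the slave as root would look roughly like this (only the user in
 the slave URL changes; just a sketch):
 
 gluster volume geo-replication test ssh://root@yval1020:/users/geo-rep start
 gluster volume geo-replication test ssh://root@yval1020:/users/geo-rep status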
 
 Thanks,
 Vijaykumar 
 
 - Original Message -
 From: anthony garnier sokar6...@hotmail.com
 To: vkop...@redhat.com
 Cc: gluster-users@gluster.org
 Sent: Wednesday, August 1, 2012 6:28:37 PM
 Subject: Re: [Gluster-users] Geo rep fail
 
 
 
 Hi Vijay, 
 
 Some complementary info : 
 
 * SLES 11.2 
 * 3.0.26-0.7-xen 
 * glusterfs 3.3.0 built on Jul 16 2012 14:28:16 
 * Python 2.6.8 
 * rsync version 3.0.4 
 * OpenSSH_4.3p2, OpenSSL 0.9.8a 11 Oct 2005 
 
 
 * 
 ssh command used: ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no 
 -i /var/lib/glusterd/geo-replication/secret.pem (secret.pem is the key of user sshux) 
 
 I also changed one line in gconf.py because I was having difficulties with the 
 ControlMaster option and the -S option: 
 
 # cls.ssh_ctl_args = ["-oControlMaster=auto", "-S", os.path.join(ctld, "gsycnd-ssh-%r@%h:%p")] 
 cls.ssh_ctl_args = ["-oControlMaster=no"] 
 
 Gluster cmd : 
 
 # gluster volume geo-replication test ssh://sshux@yval1020:/users/geo-rep 
 start 
 
 
 Thx for your help. 
 
 Anthony 
 
 
 
  Date: Tue, 31 Jul 2012 08:18:52 -0400 
  From: vkop...@redhat.com 
  To: sokar6...@hotmail.com 
  CC: gluster-users@gluster.org 
  Subject: Re: [Gluster-users] Geo rep fail 
  
  Thanks anthony, I'll try to reproduce that. 
  
  -Vijaykumar 
  
  - Original Message - 
  From: anthony garnier sokar6...@hotmail.com 
  To: vkop...@redhat.com 
  Cc: gluster-users@gluster.org 
  Sent: Tuesday, July 31, 2012 5:13:13 PM 
  Subject: Re: [Gluster-users] Geo rep fail 
  
  
  
  Hi Vijay, 
  
  I used the tarball here : 
  http://download.gluster.org/pub/gluster/glusterfs/LATEST/ 
  
  
  
  
   Date: Tue, 31 Jul 2012 07:39:51 -0400 
   From: vkop...@redhat.com 
   To: sokar6...@hotmail.com 
   CC: gluster-users@gluster.org 
   Subject: Re: [Gluster-users] Geo rep fail 
   
   Hi anthony, 
   
   By Glusterfs-3.3 version, you mean this rpm 
   http://bits.gluster.com/pub/gluster/glusterfs/3.3.0/. 
   or If you are working with git repo, can you give me branch and Head. 
   
   -Vijaykumar 
   
   - Original Message - 
   From: anthony garnier sokar6...@hotmail.com 
   To: gluster-users@gluster.org 
   Sent: Tuesday, July 31, 2012 2:47:40 PM 
   Subject: [Gluster-users] Geo rep fail 
   
   
   
   Hello everyone, 
   
   I'm using Glusterfs 3.3 and I have some difficulties to setup 
   geo-replication over ssh. 
   
   # gluster volume geo-replication test status 
   MASTER SLAVE STATUS 
   

   test ssh://sshux@yval1020:/users/geo-rep faulty 
   test file:///users/geo-rep OK 
   
   As you can see, the one in a local folder works fine. 
   
   This is my config : 
   
   Volume Name: test 
   Type: Replicate 
   Volume ID: 2f0b0eff-6166-4601-8667-6530561eea1c 
   Status: Started 
   Number of Bricks: 1 x 2 = 2 
   Transport-type: tcp 
   Bricks: 
   Brick1: yval1010:/users/exp 
   Brick2: yval1020:/users/exp 
   Options Reconfigured: 
   geo-replication.indexing: on 
   cluster.eager-lock: on 
   performance.cache-refresh-timeout: 60 
   network.ping-timeout: 10 
   performance.cache-size: 512MB 
   performance.write-behind-window-size: 256MB 
   features.quota-timeout: 30 
   features.limit-usage: /:20GB,/kernel:5GB,/toto:2GB,/troll:1GB 
   features.quota: on 
   nfs.port: 2049 
   
   
   This is the log : 
   
   [2012-07-31 11:10:38.711314] I [monitor(monitor):81:monitor] Monitor: 
   starting gsyncd worker 
   [2012-07-31 11:10:38.844959] I [gsyncd:354:main_i] top: syncing: 
   gluster://localhost:test - ssh://sshux@yval1020:/users/geo-rep 
   [2012-07-31 11:10:44.526469] I [master:284:crawl] GMaster: new master is 
   2f0b0eff-6166-4601-8667-6530561eea1c 
   [2012-07-31 11:10:44.527038] I [master:288:crawl] GMaster: primary master 
   with volume id 2f0b0eff-6166-4601-8667-6530561eea1c ... 
   [2012-07-31 11:10:44.644319] E [repce:188:__call__] RepceClient: call 
   10810:140268954724096:1343725844.53 (xtime) failed on peer with OSError 
   [2012-07-31 11:10:44.644629] E [syncdutils:184:log_raise_exception] 
   top: FAIL: 
   Traceback (most recent call last): 
   File /soft/GLUSTERFS//libexec/glusterfs/python/syncdaemon

[Gluster-users] Geo rep fail

2012-07-31 Thread anthony garnier

Hello everyone,

I'm using GlusterFS 3.3 and I am having some difficulties setting up geo-replication 
over ssh.

 # gluster volume geo-replication test status
MASTER    SLAVE                                      STATUS
------------------------------------------------------------
test      ssh://sshux@yval1020:/users/geo-rep        faulty
test      file:///users/geo-rep                      OK

As you can see, the one in a local folder works  fine.

This is my config : 

Volume Name: test
Type: Replicate
Volume ID: 2f0b0eff-6166-4601-8667-6530561eea1c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: yval1010:/users/exp
Brick2: yval1020:/users/exp
Options Reconfigured:
geo-replication.indexing: on
cluster.eager-lock: on
performance.cache-refresh-timeout: 60
network.ping-timeout: 10
performance.cache-size: 512MB
performance.write-behind-window-size: 256MB
features.quota-timeout: 30
features.limit-usage: /:20GB,/kernel:5GB,/toto:2GB,/troll:1GB
features.quota: on
nfs.port: 2049


This is the log : 

[2012-07-31 11:10:38.711314] I [monitor(monitor):81:monitor] Monitor: starting 
gsyncd worker
[2012-07-31 11:10:38.844959] I [gsyncd:354:main_i] top: syncing: 
gluster://localhost:test -> ssh://sshux@yval1020:/users/geo-rep
[2012-07-31 11:10:44.526469] I [master:284:crawl] GMaster: new master is 
2f0b0eff-6166-4601-8667-6530561eea1c
[2012-07-31 11:10:44.527038] I [master:288:crawl] GMaster: primary master with 
volume id 2f0b0eff-6166-4601-8667-6530561eea1c ...
[2012-07-31 11:10:44.644319] E [repce:188:__call__] RepceClient: call 
10810:140268954724096:1343725844.53 (xtime) failed on peer with OSError
[2012-07-31 11:10:44.644629] E [syncdutils:184:log_raise_exception] top: FAIL:
Traceback (most recent call last):
  File "/soft/GLUSTERFS//libexec/glusterfs/python/syncdaemon/gsyncd.py", line 115, in main
    main_i()
  File "/soft/GLUSTERFS//libexec/glusterfs/python/syncdaemon/gsyncd.py", line 365, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/resource.py", line 756, in service_loop
    GMaster(self, args[0]).crawl_loop()
  File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py", line 143, in crawl_loop
    self.crawl()
  File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py", line 308, in crawl
    xtr0 = self.xtime(path, self.slave)
  File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py", line 74, in xtime
    xt = rsc.server.xtime(path, self.uuid)
  File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/repce.py", line 204, in __call__
    return self.ins(self.meth, *a)
  File "/soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/repce.py", line 189, in __call__
    raise res
OSError: [Errno 95] Operation not supported


Apparently there are some errors with xtime, and yet I have extended attributes 
enabled. 
Any help would be greatly appreciated.

Anthony








  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Geo rep fail

2012-07-31 Thread anthony garnier

Hi Vijay,

I used the tarball here : 
http://download.gluster.org/pub/gluster/glusterfs/LATEST/


 Date: Tue, 31 Jul 2012 07:39:51 -0400
 From: vkop...@redhat.com
 To: sokar6...@hotmail.com
 CC: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Geo rep fail
 
 Hi anthony, 
 
 By GlusterFS 3.3, do you mean this rpm:
 http://bits.gluster.com/pub/gluster/glusterfs/3.3.0/ ?
 Or, if you are working with the git repo, can you give me the branch and HEAD? 
 
 -Vijaykumar
 
 - Original Message -
 From: anthony garnier sokar6...@hotmail.com
 To: gluster-users@gluster.org
 Sent: Tuesday, July 31, 2012 2:47:40 PM
 Subject: [Gluster-users] Geo rep fail
 
 
 
 Hello everyone, 
 
 I'm using Glusterfs 3.3 and I have some difficulties to setup geo-replication 
 over ssh. 
 
 # gluster volume geo-replication test status 
 MASTER SLAVE STATUS 
 
  
 test ssh://sshux@yval1020:/users/geo-rep faulty 
 test file:///users/geo-rep OK 
 
 As you can see, the one in a local folder works fine. 
 
 This is my config : 
 
 Volume Name: test 
 Type: Replicate 
 Volume ID: 2f0b0eff-6166-4601-8667-6530561eea1c 
 Status: Started 
 Number of Bricks: 1 x 2 = 2 
 Transport-type: tcp 
 Bricks: 
 Brick1: yval1010:/users/exp 
 Brick2: yval1020:/users/exp 
 Options Reconfigured: 
 geo-replication.indexing: on 
 cluster.eager-lock: on 
 performance.cache-refresh-timeout: 60 
 network.ping-timeout: 10 
 performance.cache-size: 512MB 
 performance.write-behind-window-size: 256MB 
 features.quota-timeout: 30 
 features.limit-usage: /:20GB,/kernel:5GB,/toto:2GB,/troll:1GB 
 features.quota: on 
 nfs.port: 2049 
 
 
 This is the log : 
 
 [2012-07-31 11:10:38.711314] I [monitor(monitor):81:monitor] Monitor: 
 starting gsyncd worker 
 [2012-07-31 11:10:38.844959] I [gsyncd:354:main_i] top: syncing: 
 gluster://localhost:test - ssh://sshux@yval1020:/users/geo-rep 
 [2012-07-31 11:10:44.526469] I [master:284:crawl] GMaster: new master is 
 2f0b0eff-6166-4601-8667-6530561eea1c 
 [2012-07-31 11:10:44.527038] I [master:288:crawl] GMaster: primary master 
 with volume id 2f0b0eff-6166-4601-8667-6530561eea1c ... 
 [2012-07-31 11:10:44.644319] E [repce:188:__call__] RepceClient: call 
 10810:140268954724096:1343725844.53 (xtime) failed on peer with OSError 
 [2012-07-31 11:10:44.644629] E [syncdutils:184:log_raise_exception] top: 
 FAIL: 
 Traceback (most recent call last): 
 File /soft/GLUSTERFS//libexec/glusterfs/python/syncdaemon/gsyncd.py, line 
 115, in main 
 main_i() 
 File /soft/GLUSTERFS//libexec/glusterfs/python/syncdaemon/gsyncd.py, line 
 365, in main_i 
 local.service_loop(*[r for r in [remote] if r]) 
 File /soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/resource.py, line 
 756, in service_loop 
 GMaster(self, args[0]).crawl_loop() 
 File /soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py, line 
 143, in crawl_loop 
 self.crawl() 
 File /soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py, line 
 308, in crawl 
 xtr0 = self.xtime(path, self.slave) 
 File /soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/master.py, line 
 74, in xtime 
 xt = rsc.server.xtime(path, self.uuid) 
 File /soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/repce.py, line 
 204, in __call__ 
 return self.ins(self.meth, *a) 
 File /soft/GLUSTERFS/libexec/glusterfs/python/syncdaemon/repce.py, line 
 189, in __call__ 
 raise res 
 OSError: [Errno 95] Operation not supported 
 
 
 Apparently there is some errors with xtime and yet I have extended attribute 
 activated. 
 Any help will be gladly appreciated. 
 
 Anthony 
 
 
 
 
 
 
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS subdirectory Solaris client

2012-07-19 Thread anthony garnier

Hi Shishir,

I have reconfigured the port to 2049, so there is no need to specify it. 
Moreover, mounting the whole volume works fine; it's only the mount of the 
subdirectory that doesn't work.

Thx and Regards,

Anthony


 Date: Wed, 18 Jul 2012 22:54:28 -0400
 From: sgo...@redhat.com
 To: sokar6...@hotmail.com
 CC: gluster-users@gluster.org
 Subject: Re: [Gluster-users] NFS subdirectory Solaris client
 
 Hi Anthony,
 
 Please also specify this option port=38467, and try mounting it.
 
 With regards,
 Shishir
 
 - Original Message -
 From: anthony garnier sokar6...@hotmail.com
 To: gluster-users@gluster.org
 Sent: Wednesday, July 18, 2012 3:21:36 PM
 Subject: [Gluster-users] NFS subdirectory Solaris client
 
 
 
 Hi everyone, 
 
 I still have a problem mounting a subdirectory over NFS on a Solaris client: 
 
 # mount -o proto=tcp,vers=3 nfs://yval1010:/test/test2 /users/glusterfs_mnt 
 nfs mount: yval1010: : RPC: Program not registered 
 nfs mount: retrying: /users/glusterfs_mnt 
 
 
 [2012-07-18 11:43:43.484994] E [nfs3.c:305:__nfs3_get_volume_id] 
 (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125) 
 [0x7f5418ea4e15] 
 (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48)
  [0x7f5418e9b908] 
 (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x4c)
  [0x7f5418e9a54c]))) 0-nfs-nfsv3: invalid argument: xl 
 [2012-07-18 11:43:43.491088] E [nfs3.c:305:__nfs3_get_volume_id] 
 (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125) 
 [0x7f5418ea4e15] 
 (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48)
  [0x7f5418e9b908] 
 (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x4c)
  [0x7f5418e9a54c]))) 0-nfs-nfsv3: invalid argument: xl 
 [2012-07-18 11:43:48.494268] E [nfs3.c:305:__nfs3_get_volume_id] 
 (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125) 
 [0x7f5418ea4e15] 
 (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48)
  [0x7f5418e9b908] 
 (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x4c)
  [0x7f5418e9a54c]))) 0-nfs-nfsv3: invalid argument: xl 
 [2012-07-18 11:43:48.992370] W [socket.c:195:__socket_rwv] 
 0-socket.nfs-server: readv failed (Connection reset by peer) 
 [2012-07-18 11:43:57.422070] W [socket.c:195:__socket_rwv] 
 0-socket.nfs-server: readv failed (Connection reset by peer) 
 [2012-07-18 11:43:58.498666] E [nfs3.c:305:__nfs3_get_volume_id] 
 (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125) 
 [0x7f5418ea4e15] 
 (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48)
  [0x7f5418e9b908] 
 (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x4c)
  [0x7f5418e9a54c]))) 0-nfs-nfsv3: invalid argument: xl 
 
 
 Can someone confirm that ? 
 
 Regards, 
 
 Anthony 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS subdirectory Solaris client

2012-07-19 Thread anthony garnier

Hi Rajesh ,
My mistake, I didn't specify that I am on 3.3. 
I didn't see any particular comment in the Admin guide about subdirectory export.
Are you referring to the nfs.export-dir option? By default, every 
subdirectory is exported.

Regards,

Anthony


 Date: Thu, 19 Jul 2012 04:42:20 -0400
 From: raj...@redhat.com
 To: sokar6...@hotmail.com
 CC: gluster-users@gluster.org; sgo...@redhat.com
 Subject: Re: [Gluster-users] NFS subdirectory Solaris client
 
 Hi Anthony,
Subdirectory mounts are not possible with Gluster NFS in the 3.2.x versions.
You can get the 3.3 gNFS to work for subdirectory mounts on Solaris, though it
requires some oblique steps to get it working. The steps are provided in
the documentation (the Admin guide, I think) for 3.3.
 
 Regards, 
 Rajesh Amaravathi, 
 Software Engineer, GlusterFS 
 RedHat Inc. 
 
 - Original Message -
 From: anthony garnier sokar6...@hotmail.com
 To: sgo...@redhat.com
 Cc: gluster-users@gluster.org
 Sent: Thursday, July 19, 2012 12:50:41 PM
 Subject: Re: [Gluster-users] NFS subdirectory Solaris client
 
 
 
 Hi Shishir, 
 
 I have reconfigured the port to 2049 so there is no need to specify it. 
 Moreover the mount of the volume works fine, it's only the mount of the 
 subdirectory that doesn't work. 
 
 Thx and Regards, 
 
 Anthony 
 
 
 
 
  Date: Wed, 18 Jul 2012 22:54:28 -0400 
  From: sgo...@redhat.com 
  To: sokar6...@hotmail.com 
  CC: gluster-users@gluster.org 
  Subject: Re: [Gluster-users] NFS subdirectory Solaris client 
  
  Hi Anthony, 
  
  Please also specify this option port=38467, and try mounting it. 
  
  With regards, 
  Shishir 
  
  - Original Message - 
  From: anthony garnier sokar6...@hotmail.com 
  To: gluster-users@gluster.org 
  Sent: Wednesday, July 18, 2012 3:21:36 PM 
  Subject: [Gluster-users] NFS subdirectory Solaris client 
  
  
  
  Hi everyone, 
  
  I still have problem to mount subdirectory in NFS on Solaris client : 
  
  # mount -o proto=tcp,vers=3 nfs://yval1010:/test/test2 /users/glusterfs_mnt 
  nfs mount: yval1010: : RPC: Program not registered 
  nfs mount: retrying: /users/glusterfs_mnt 
  
  
  [2012-07-18 11:43:43.484994] E [nfs3.c:305:__nfs3_get_volume_id] 
  (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125) 
  [0x7f5418ea4e15] 
  (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48)
   [0x7f5418e9b908] 
  (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x4c)
   [0x7f5418e9a54c]))) 0-nfs-nfsv3: invalid argument: xl 
  [2012-07-18 11:43:43.491088] E [nfs3.c:305:__nfs3_get_volume_id] 
  (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125) 
  [0x7f5418ea4e15] 
  (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48)
   [0x7f5418e9b908] 
  (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x4c)
   [0x7f5418e9a54c]))) 0-nfs-nfsv3: invalid argument: xl 
  [2012-07-18 11:43:48.494268] E [nfs3.c:305:__nfs3_get_volume_id] 
  (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125) 
  [0x7f5418ea4e15] 
  (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48)
   [0x7f5418e9b908] 
  (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x4c)
   [0x7f5418e9a54c]))) 0-nfs-nfsv3: invalid argument: xl 
  [2012-07-18 11:43:48.992370] W [socket.c:195:__socket_rwv] 
  0-socket.nfs-server: readv failed (Connection reset by peer) 
  [2012-07-18 11:43:57.422070] W [socket.c:195:__socket_rwv] 
  0-socket.nfs-server: readv failed (Connection reset by peer) 
  [2012-07-18 11:43:58.498666] E [nfs3.c:305:__nfs3_get_volume_id] 
  (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125) 
  [0x7f5418ea4e15] 
  (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48)
   [0x7f5418e9b908] 
  (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x4c)
   [0x7f5418e9a54c]))) 0-nfs-nfsv3: invalid argument: xl 
  
  
  Can someone confirm that ? 
  
  Regards, 
  
  Anthony 
  
  ___ 
  Gluster-users mailing list 
  Gluster-users@gluster.org 
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS subdirectory Solaris client

2012-07-19 Thread anthony garnier

Hi Rajesh,

Thank you for the workaround, it works fine.
Can we expect a patch in a future release to avoid this procedure?

Regards,

Anthony

 Date: Thu, 19 Jul 2012 05:30:24 -0400
 From: raj...@redhat.com
 To: sokar6...@hotmail.com
 CC: gluster-users@gluster.org; sgo...@redhat.com
 Subject: Re: [Gluster-users] NFS subdirectory Solaris client
 
 Forwarding from Krishna's mail; we need to update the docs with this info:
 
 Here are the steps:
 On the Linux client machine:
 * Make sure the directory you want to export already exists; if not, create it.
   If /mnt/nfs is the mount point, do this:
   mkdir /mnt/nfs/subdir
 * On the storage node, do:
   gluster volume set volname nfs.export-dir /subdir
   gluster volume set volname nfs.mount-udp on
 * Do a showmount -e storage-node-ip to see that subdir is exported too.
 * On a LINUX client, do this:
   mount -o proto=tcp storage-node-ip:/volname/subdir /mnt/nfs
   i.e. you are mounting the subdir.
 * Now, on the SOLARIS client, do this:
   mount nfs://storage-node-ip:/volname/subdir /mnt/nfs
   You should be able to access the exported subdir on the Solaris machine. Note 
 that you have to mount the subdir on a Linux machine first with proto=tcp 
 before trying to mount it on the Solaris machine (see the worked example just below).
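 
 Applied to the setup from earlier in this thread (volume "test", subdirectory
 "test2", server yval1010, client mount point /users/glusterfs_mnt), the
 sequence would look roughly like this (a sketch, reusing only names that
 already appear in the thread):
 
   # on a storage node
   gluster volume set test nfs.export-dir /test2
   gluster volume set test nfs.mount-udp on
   showmount -e yval1010
 
   # on a Linux client first
   mount -o proto=tcp yval1010:/test/test2 /users/glusterfs_mnt
 
   # then on the Solaris client
   mount nfs://yval1010:/test/test2 /users/glusterfs_mnt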
 
 Regards, 
 Rajesh Amaravathi, 
 Software Engineer, GlusterFS 
 RedHat Inc. 
 
 - Original Message -
 From: anthony garnier sokar6...@hotmail.com
 To: raj...@redhat.com
 Cc: gluster-users@gluster.org, sgo...@redhat.com
 Sent: Thursday, July 19, 2012 2:19:24 PM
 Subject: RE: [Gluster-users] NFS subdirectory Solaris client
 
 
 Hi Rajesh , 
 My mistake, I didn't specify that I was in 3.3. 
 I didn't see any particular comment in the Admin doc about subdirectory 
 export. 
 Are you referring to this option: nfs.export-dir, but by default every 
 subdirectory are exported. 
 
 Regards, 
 
 Anthony 
 
 
 
 
  Date: Thu, 19 Jul 2012 04:42:20 -0400 
  From: raj...@redhat.com 
  To: sokar6...@hotmail.com 
  CC: gluster-users@gluster.org; sgo...@redhat.com 
  Subject: Re: [Gluster-users] NFS subdirectory Solaris client 
  
  Hi Anthony, 
  sub directory mount is not possible with GlusterNfs in 3.2.x version. 
  you can get the 3.3 gNfs to work for subdir mount on solaris, though it 
  requires some oblique steps to get it working. The steps are provided in 
  the documentation (Admin guide i think) for 3.3. 
  
  Regards, 
  Rajesh Amaravathi, 
  Software Engineer, GlusterFS 
  RedHat Inc. 
  
  - Original Message - 
  From: anthony garnier sokar6...@hotmail.com 
  To: sgo...@redhat.com 
  Cc: gluster-users@gluster.org 
  Sent: Thursday, July 19, 2012 12:50:41 PM 
  Subject: Re: [Gluster-users] NFS subdirectory Solaris client 
  
  
  
  Hi Shishir, 
  
  I have reconfigured the port to 2049 so there is no need to specify it. 
  Moreover the mount of the volume works fine, it's only the mount of the 
  subdirectory that doesn't work. 
  
  Thx and Regards, 
  
  Anthony 
  
  
  
  
   Date: Wed, 18 Jul 2012 22:54:28 -0400 
   From: sgo...@redhat.com 
   To: sokar6...@hotmail.com 
   CC: gluster-users@gluster.org 
   Subject: Re: [Gluster-users] NFS subdirectory Solaris client 
   
   Hi Anthony, 
   
   Please also specify this option port=38467, and try mounting it. 
   
   With regards, 
   Shishir 
   
   - Original Message - 
   From: anthony garnier sokar6...@hotmail.com 
   To: gluster-users@gluster.org 
   Sent: Wednesday, July 18, 2012 3:21:36 PM 
   Subject: [Gluster-users] NFS subdirectory Solaris client 
   
   
   
   Hi everyone, 
   
   I still have problem to mount subdirectory in NFS on Solaris client : 
   
   # mount -o proto=tcp,vers=3 nfs://yval1010:/test/test2 
   /users/glusterfs_mnt 
   nfs mount: yval1010: : RPC: Program not registered 
   nfs mount: retrying: /users/glusterfs_mnt 
   
   
   [2012-07-18 11:43:43.484994] E [nfs3.c:305:__nfs3_get_volume_id] 
   (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125)
[0x7f5418ea4e15] 
   (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48)
[0x7f5418e9b908] 
   (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x4c)
[0x7f5418e9a54c]))) 0-nfs-nfsv3: invalid argument: xl 
   [2012-07-18 11:43:43.491088] E [nfs3.c:305:__nfs3_get_volume_id] 
   (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125)
[0x7f5418ea4e15] 
   (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48)
[0x7f5418e9b908] 
   (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x4c)
[0x7f5418e9a54c]))) 0-nfs-nfsv3: invalid argument: xl 
   [2012-07-18 11:43:48.494268] E [nfs3.c:305:__nfs3_get_volume_id] 
   (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125)
[0x7f5418ea4e15] 
   (--/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48)
[0x7f5418e9b908] 
   (--/usr/local/lib//glusterfs/3.3.0/xlator

[Gluster-users] mv not atomic in glusterfs

2012-05-25 Thread anthony garnier

Hi all,
I'm using a distributed-replicated volume and it seems that mv is not atomic 
on a volume mounted using the GlusterFS client (but it works well with NFS).

Volume Name: test
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ylal3540:/users3/test
Brick2: ylal3520:/users3/test
Brick3: ylal3530:/users3/test
Brick4: ylal3510:/users3/test
Options Reconfigured:
nfs.port: 2049
performance.cache-size: 6GB
performance.cache-refresh-timeout: 10
performance.cache-min-file-size: 1kB
performance.cache-max-file-size: 4GB
network.ping-timeout: 10
features.quota: off
features.quota-timeout: 60


mount -t glusterfs ylal3530:/test /users/glusterfs_mnt/

# while :; do echo test > /users/glusterfs_mnt/tmp/atomic1; mv 
/users/glusterfs_mnt/tmp/atomic1 /users/glusterfs_mnt/tmp/atomic2; done

# while :; do cat /users/glusterfs_mnt/tmp/atomic2 1>/dev/null; done
cat: /users/glusterfs_mnt/tmp/atomic2: No such file or directory
cat: /users/glusterfs_mnt/tmp/atomic2: No such file or directory
cat: /users/glusterfs_mnt/tmp/atomic2: No such file or directory
cat: /users/glusterfs_mnt/tmp/atomic2: No such file or directory
cat: /users/glusterfs_mnt/tmp/atomic2: No such file or directory
cat: /users/glusterfs_mnt/tmp/atomic2: No such file or directory







  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] FD stay open

2012-05-04 Thread anthony garnier

Amar,
Thanks for answering us. Are there any other solutions with 3.2.x to avoid this 
issue, other than restarting the gluster daemon?
Also, when will 3.3.x be available as a stable version?
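
In the meantime we just keep an eye on the leak with the same lsof check as in 
my earlier mail and recycle the daemon when the count grows; roughly:

# count deleted-but-still-open files held by glusterfs processes
lsof -n -P | grep glusterfs | grep -c deleted

# when the count keeps growing, the only cure we know of on 3.2.x is to restart
# the gluster daemon (the init script name/path depends on the installation)
/etc/init.d/glusterd restart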

Thx

Anthony








Message: 1
Date: Mon, 30 Apr 2012 21:44:50 +0530
From: Amar Tumballi ama...@redhat.com
Subject: Re: [Gluster-users] Gluster-users Digest, Vol 48, Issue 43
To: Gerald Brandt g...@majentis.com
Cc: gluster-users@gluster.org, anthony garnier sokar6...@hotmail.com
Message-ID: 4f9eba7a.5080...@redhat.com
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
 

 You are having the exact same problem I am.  So far, no response from anyone 
 at Gluster/RedHat as to what is happening or if this is a known issue.

 
Hi Gerald/Anthony,
 
This issue is not easy to handle with 3.2.x version of gluster's NFS 
server. This issue is being addressed with 3.3.x branch (ie current 
master branch). Please try 3.3.0beta3+ or qa36+ for testing the behavior.
 
This happens because the NFS process works on FHs (file handles), and for 
that we needed to keep an fd ref for as long as the NFS client holds a reference 
to the file handle.
 
With 3.3.0, we changed some of the internals of how we handle NFS FHs, 
so this problem should not happen in the 3.3.0 release.
 
 
Regards,
Amar ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] remote operation failed: No space left on device

2012-04-27 Thread anthony garnier




  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] remote operation failed: No space left on device

2012-04-27 Thread anthony garnier

After some research, I found what's going on: 
it seems that the glusterfs process has a lot of FDs open on each brick: 

lsof -n -P | grep deleted
ksh 644   cdui2w  REG 253,10 952
190 /users/mmr00/log/2704mmr.log (deleted)
glusterfs  2357   root   10u  REG  253,6   0
 28 /tmp/tmpfnMwo2I (deleted)
glusterfs  2361   root   10u  REG  253,6   0
 35 /tmp/tmpfnBlK2I (deleted)
glusterfs  2365   root   10u  REG  253,6   0
 41 /tmp/tmpfHZG51I (deleted)
glusterfs  2365   root   12u  REG  253,61011
 13 /tmp/tmpfPGJjje (deleted)
glusterfs  2365   root   13u  REG  253,61013
 20 /tmp/tmpf4ITi6m (deleted)
glusterfs  2365   root   17u  REG  253,61012
 25 /tmp/tmpfBwwE1h (deleted)
glusterfs  2365   root   18u  REG  253,61011
 43 /tmp/tmpfsoNSmV (deleted)
glusterfs  2365   root   19u  REG  253,61011
 19 /tmp/tmpfDmMruu (deleted)
glusterfs  2365   root   21u  REG  253,61012
 47 /tmp/tmpfE4SpVM (deleted)
glusterfs  2365   root   22u  REG  253,61012
 48 /tmp/tmpfHjjdXw (deleted)
glusterfs  2365   root   24u  REG  253,61011
 49 /tmp/tmpfwOoX6F (deleted)
glusterfs  2365   root   26u  REG 253,13 13509969920   
1829 /users3/poolsave/yval9000/test/tmp/23-04-18-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   27u  REG 253,13 13538048000   
1842 /users3/poolsave/yval9000/test/tmp/24-04-07-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   28u  REG 253,13 13607956480   
1737 /users3/poolsave/yval9000/test/tmp/22-04-01-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   30u  REG 253,13 13519441920   
1337 /users3/poolsave/yval9000/test/tmp/16-04-14-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   31u  REG 253,13 13530081280   
1342 /users3/poolsave/yval9000/test/tmp/16-04-16-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   34u  REG 253,13 13559777280   
1347 /users3/poolsave/yval9000/test/tmp/16-04-20-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   35u  REG 253,13 13581772800   
1352 /users3/poolsave/yval9000/test/tmp/16-04-23-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   36u  REG 253,13 13513922560   
1357 /users3/poolsave/yval9000/test/tmp/17-04-04-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   37u  REG 253,13 13513338880   
1362 /users3/poolsave/yval9000/test/tmp/17-04-07-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   38u  REG 253,13 13520199680   
1367 /users3/poolsave/yval9000/test/tmp/17-04-08-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   39u  REG 253,13 13576509440   
1372 /users3/poolsave/yval9000/test/tmp/18-04-00-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   40u  REG 253,13 13591869440   
1377 /users3/poolsave/yval9000/test/tmp/18-04-02-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   41u  REG 253,13 13592371200   
1382 /users3/poolsave/yval9000/test/tmp/18-04-04-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   42u  REG 253,13 13512202240   
1387 /users3/poolsave/yval9000/test/tmp/18-04-11-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   43u  REG 253,13 13528012800   
1392 /users3/poolsave/yval9000/test/tmp/18-04-19-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   44u  REG 253,13 13547735040   
1397 /users3/poolsave/yval9000/test/tmp/18-04-22-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   45u  REG 253,13 13574236160   
1402 /users3/poolsave/yval9000/test/tmp/19-04-03-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   46u  REG 253,13 13553295360   
1407 /users3/poolsave/yval9000/test/tmp/19-04-05-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   47u  REG 253,13 13542963200   
1416 /users3/poolsave/yval9000/test/tmp/19-04-13-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   48u  REG 253,13 13556346880   
1421 /users3/poolsave/yval9000/test/tmp/19-04-15-00_test_glusterfs_save.tar 
(deleted)
glusterfs  2365   root   49u  REG 253,13 13517445120   
1426 /users3/poolsave/yval9000/test/tmp/19-04-16-00_test_glusterfs_save.tar 
(deleted)

Re: [Gluster-users] Gluster-users Digest, Vol 48, Issue 43

2012-04-27 Thread anthony garnier

Gerald,

I'm currently using 3.2.5 under SLES 11. Yes my client is using NFS to connect 
to the server.

But once again the FDs seem to stay open on both replicas:

glusterfs  7002   root   22u  REG 253,13 13499473920
106 /users3/poolsave/yval9000/test/tmp/27-04-16-00_test_glusterfs_save.tar 
(deleted)


I checked in my script log : 


..
.
/users2/splunk/var/
/users2/splunk/var/log/
/users2/splunk/var/log/splunk/
Creation of file /users98/test/tmp/27-04-16-00_test_glusterfs_save.tar  
moving previous file in  /users98/test/gluster  
 => Seems to be something related to the moment when the file is moved

And then : 

# cat netback_27-04-16-30.log
Simulation netback, tar in /dev/null
tar: Removing leading `/' from member names
/users98/test/gluster/27-04-16-00_test_glusterfs_save.tar
deleting archive 27-04-16-00_test_glusterfs_save.tar


Maybe we should open a bug if we don't get an answer?

Anthony,




 Message: 1
 Date: Fri, 27 Apr 2012 08:07:33 -0500 (CDT)
 From: Gerald Brandt g...@majentis.com
 Subject: Re: [Gluster-users] remote operation failed: No space left on
   device
 To: anthony garnier sokar6...@hotmail.com
 Cc: gluster-users@gluster.org
 Message-ID: 2490900.366.1335532031778.JavaMail.gbr@thinkpad
 Content-Type: text/plain; charset=utf-8
 
 Anthony,
 
 I have the exact same issue with GlusterFS 3.2.5 under Ubuntu 10.04.  I 
 haven't got an answer yet on what is happening.
 
 Are you using the NFS server in GlusterFS?
 
 Gerald
 
 
 - Original Message -
  From: anthony garnier sokar6...@hotmail.com
  To: gluster-users@gluster.org
  Sent: Friday, April 27, 2012 7:41:16 AM
  Subject: Re: [Gluster-users] remote operation failed: No space left on 
  device
  
  
  
  After some research, I found what's going on:
  seems that the glusterfs process has a lot of FD open on each brick :
  
  lsof -n -P | grep deleted
  ksh 644 cdui 2w REG 253,10 952 190 /users/mmr00/log/2704mmr.log
  (deleted)
  glusterfs 2357 root 10u REG 253,6 0 28 /tmp/tmpfnMwo2I (deleted)
  glusterfs 2361 root 10u REG 253,6 0 35 /tmp/tmpfnBlK2I (deleted)
  glusterfs 2365 root 10u REG 253,6 0 41 /tmp/tmpfHZG51I (deleted)
  glusterfs 2365 root 12u REG 253,6 1011 13 /tmp/tmpfPGJjje (deleted)
  glusterfs 2365 root 13u REG 253,6 1013 20 /tmp/tmpf4ITi6m (deleted)
  glusterfs 2365 root 17u REG 253,6 1012 25 /tmp/tmpfBwwE1h (deleted)
  glusterfs 2365 root 18u REG 253,6 1011 43 /tmp/tmpfsoNSmV (deleted)
  glusterfs 2365 root 19u REG 253,6 1011 19 /tmp/tmpfDmMruu (deleted)
  glusterfs 2365 root 21u REG 253,6 1012 47 /tmp/tmpfE4SpVM (deleted)
  glusterfs 2365 root 22u REG 253,6 1012 48 /tmp/tmpfHjjdXw (deleted)
  glusterfs 2365 root 24u REG 253,6 1011 49 /tmp/tmpfwOoX6F (deleted)
  glusterfs 2365 root 26u REG 253,13 13509969920 1829
  /users3/poolsave/yval9000/test/tmp/23-04-18-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 27u REG 253,13 13538048000 1842
  /users3/poolsave/yval9000/test/tmp/24-04-07-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 28u REG 253,13 13607956480 1737
  /users3/poolsave/yval9000/test/tmp/22-04-01-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 30u REG 253,13 13519441920 1337
  /users3/poolsave/yval9000/test/tmp/16-04-14-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 31u REG 253,13 13530081280 1342
  /users3/poolsave/yval9000/test/tmp/16-04-16-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 34u REG 253,13 13559777280 1347
  /users3/poolsave/yval9000/test/tmp/16-04-20-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 35u REG 253,13 13581772800 1352
  /users3/poolsave/yval9000/test/tmp/16-04-23-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 36u REG 253,13 13513922560 1357
  /users3/poolsave/yval9000/test/tmp/17-04-04-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 37u REG 253,13 13513338880 1362
  /users3/poolsave/yval9000/test/tmp/17-04-07-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 38u REG 253,13 13520199680 1367
  /users3/poolsave/yval9000/test/tmp/17-04-08-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 39u REG 253,13 13576509440 1372
  /users3/poolsave/yval9000/test/tmp/18-04-00-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 40u REG 253,13 13591869440 1377
  /users3/poolsave/yval9000/test/tmp/18-04-02-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 41u REG 253,13 13592371200 1382
  /users3/poolsave/yval9000/test/tmp/18-04-04-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 42u REG 253,13 13512202240 1387
  /users3/poolsave/yval9000/test/tmp/18-04-11-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 43u REG 253,13 13528012800 1392
  /users3/poolsave/yval9000/test/tmp/18-04-19-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365 root 44u REG 253,13 13547735040 1397
  /users3/poolsave/yval9000/test/tmp/18-04-22-00_test_glusterfs_save.tar
  (deleted)
  glusterfs 2365

[Gluster-users] Best practice and various system values

2012-04-16 Thread anthony garnier




Hi all,

I was just wondering if there are any best practices when you have a private 
backend for GlusterFS.

I also wanted to know the various system values (like ulimit) you may have set.

Here is mine : 
# ulimit -a
core file size  (blocks, -c) 1
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 1029408
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) 112008152
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) unlimited
virtual memory  (kbytes, -v) 107097200
file locks  (-x) unlimited

vm.swappiness = 60
vm.vfs_cache_pressure = 100
vm.dirty_background_ratio = 10
vm.dirty_ratio = 40
/sys/block/<device>/queue/scheduler = cfq (SSD drive)
/sys/block/<device>/queue/nr_requests = 128
/proc/sys/vm/page-cluster = 3
/proc/sys/net/ipv4/tcp_fin_timeout = 30
/proc/sys/net/ipv4/tcp_rmem = 4096 262144 4194304
/proc/sys/net/ipv4/tcp_wmem = 4096 262144 4194304
/proc/sys/net/ipv4/tcp_retries2 = 15
/proc/sys/net/ipv4/tcp_keepalive_intvl = 75
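
The sysctl part of the list above is kept persistent in /etc/sysctl.conf 
(reloaded with sysctl -p); the values are simply the ones listed above:

# /etc/sysctl.conf
vm.swappiness = 60
vm.vfs_cache_pressure = 100
vm.dirty_background_ratio = 10
vm.dirty_ratio = 40
vm.page-cluster = 3
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_rmem = 4096 262144 4194304
net.ipv4.tcp_wmem = 4096 262144 4194304
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_keepalive_intvl = 75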


Regards,

Anthony

  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Quota warning

2012-04-13 Thread anthony garnier

Amar,

Thank you for your help and sorry again to flood the list.

I filed a bug as you requested: 
https://bugzilla.redhat.com/show_bug.cgi?id=812230

Regards,

Anthony


 Date: Fri, 13 Apr 2012 11:28:47 +0530
 From: ama...@redhat.com
 To: sokar6...@hotmail.com
 CC: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Quota warning
 
 On 04/10/2012 08:36 PM, anthony garnier wrote:
  Hi developer team,
 
  I have a question about those warnings populating my logs :
 
  [2012-04-10 16:55:58.465576] W [quota.c:2165:quota_fstat_cbk]
  0-poolsave-quota: quota context not set in inode (ino:-7309437976,
  gfid:f73ee23f-a8a7-4edf-8b08-6e019b20fbf3)
  [2012-04-10 16:55:58.468463] W [quota.c:2165:quota_fstat_cbk]
  0-poolsave-quota: quota context not set in inode (ino:-7309437976,
  gfid:f73ee23f-a8a7-4edf-8b08-6e019b20fbf3)
 
  What does it mean ?
  Is there a possible link between those errors and the wrong quota
  displayed:

  path        limit_set   size
  ----------------------------------
  /yval9000   1TB         977.5GB

  but du shows me:

  6.4G ./yval9000
 
 
 
 I'm not sure about this issue at the moment; can you please file a bug report? 
 From there we can take it forward, and it will help us track it for 
 future releases.
 
 
  Any help would be appreciated.
 
 Sure, we will try to see what would cause this behavior and get back.
 
 Regards,
 Amar
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] favorite-child option for split brain

2012-04-03 Thread anthony garnier

Hi everyone,

Do you know if we can still specify the favorite-child option (in the gluster 
CLI) to resolve split-brain in 3.2.5?


Regards,

Anthony
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Directory depth limit of 16

2012-04-02 Thread anthony garnier

Hello folks,

I just wanted to know if the directory depth limit of 16 has been removed in 
3.2.6?

Thx
 
Anthony Garnier 

From: sokar6...@hotmail.com
To: gluster-users@gluster.org
Subject: FW: Quota  self healing
Date: Wed, 28 Mar 2012 09:39:58 +

Hi folks,

I'm currently running GlusterFS in production on 4 servers 
(distributed-replicate) and my NFS logs get populated with:

2012-03-23 09:53:58.9619] W [quota.c:2165:quota_fstat_cbk] 0-poolsave-quota: 
quota context not set in inode (ino:-2365795820, 
gfid:c6853255-8c16-4e72-b544-5f58043a09b2)
[2012-03-23 09:53:58.11440] W [quota.c:2165:quota_fstat_cbk] 0-poolsave-quota: 
quota context not set in inode (ino:-2365795820, 
gfid:c6853255-8c16-4e72-b544-5f58043a09b2)

Is there a link between those errors above and the fact that the displayed quotas 
are often wrong? E.g. a quota set on an empty directory shows me 30 GB of data!


[2012-03-12 11:10:13.704397] E 
[afr-self-heal-metadata.c:512:afr_sh_metadata_fix] 0-venus-replicate-0: Unable 
to self-heal permissions/ownership of '/' (possible split-brain). Please fix 
the file on all backend volumes
[2012-03-12 11:10:13.704773] I 
[afr-self-heal-metadata.c:81:afr_sh_metadata_done] 0-venus-replicate-0: 
split-brain detected, aborting self heal of /
[2012-03-12 11:10:13.704819] E 
[afr-self-heal-common.c:2074:afr_self_heal_completion_cbk] 0-venus-replicate-0: 
background  meta-data self-heal failed on /


[2012-03-06 14:25:23.322265] W [client3_1-fops.c:4868:client3_1_finodelk] 
0-poolsave-client-2: (-2348122369): failed to get fd ctx. EBADFD
[2012-03-06 14:25:23.322299] W [client3_1-fops.c:4915:client3_1_finodelk] 
0-poolsave-client-2: failed to send the fop: File descriptor in bad state

[2012-03-06 16:06:39.424525] E [fd.c:465:fd_unref] 
(--/usr/local/lib/libglusterfs.so.0(default_create_cbk+0xb4) [0x7f781e7d5d84] 
(--/usr/local/lib//glusterfs/3.2.5/xlator/debug/io-stats.so(io_stats_create_cbk+0x20c)
 [0x7f781bd8c60c] 
(--/usr/local/lib//glusterfs/3.2.5/xlator/nfs/server.so(nfs_fop_create_cbk+0x73)
 [0x7f781bc416e3]))) 0-fd: fd is NULL


Any help would be appreciated

Anthony Garnier

  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Directory depth limit of 16

2012-04-02 Thread anthony garnier

Amar,

Obviously I was talking about NFS clients. Thank you for the info.
Does it mean that a file beyond this limit could be corrupted, or just 
unreachable?

Regards,

Anthony



 Date: Mon, 2 Apr 2012 17:13:10 +0530
 From: ama...@redhat.com
 To: sokar6...@hotmail.com
 CC: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Directory depth limit of 16
 
 On 04/02/2012 05:05 PM, anthony garnier wrote:
  Hello folks,
 
  I just wanted to know if the depth limit of 16 directory has been
  removed in 3.2.6 ?
 
 
 
 Hi Anthony,
 
 I guess you are talking of NFS clients (because the limit was never 
 there on FUSE mounts). The directory limit issue needed a significant 
 design change and hence we have not incorporated the fixes into 3.2.x 
 branch. This limit is removed in NFS on 3.3.x branch code only.
 
 Regards,
 Amar
 
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] glusterfs 3.3 beta2 ,Address family not supported

2012-01-05 Thread anthony garnier

Good morning folks,

I installed GlusterFS 3.3 beta2 to use it with Hadoop, but after the 
installation the glusterd daemon cannot start: 

+--+
[2012-01-05 09:38:15.143286] W [glusterfsd.c:750:cleanup_and_exit] 
(--/lib64/libc.so.6(__clone+0x6d) [0x2ab886c5e42d] (--/lib64/libpthread.so.0 
[0x2ab886a8a2a3] (--/usr/local/sbin/glusterd(glusterfs_sigwaiter+0x17c) 
[0x404a3c]))) 0-: received signum (15), shutting down
[2012-01-05 09:44:55.746275] I [glusterd.c:574:init] 0-management: Using 
/etc/glusterd as working directory
[2012-01-05 09:44:55.755201] E [socket.c:2190:socket_listen] 
0-socket.management: socket creation failed (Address family not supported by 
protocol)
[2012-01-05 09:44:55.755392] I [glusterd.c:89:glusterd_uuid_init] 0-glusterd: 
retrieved UUID: aa4a61f1-3d7d-44e1-8939-b298b3164c07
Given volfile:
+--+
  1: volume management
  2: type mgmt/glusterd
  3: option working-directory /etc/glusterd
  4: option transport-type socket
  5: option transport.socket.keepalive-time 10
  6: option transport.socket.keepalive-interval 2
  7: option transport.socket.read-fail-log off
  8: end-volume

and obviously gluster commands don't work:
# gluster peer status
Connection failed. Please check if gluster daemon is operational.

IPv6 is disabled; I'm on SuSE 10 SP3.

Thx

Anthony
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs 3.3 beta2 , Address family not supported

2012-01-05 Thread anthony garnier

Hi,

I found the solution on my own.
I just added this line in glusterd.vol :
option transport.address-family inet

Now everything seems to work fine. Sorry for the inconvenience.
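
For reference, the management volfile from the log above then looks like this 
with the extra line added (the option goes inside the management volume block):

volume management
    type mgmt/glusterd
    option working-directory /etc/glusterd
    option transport-type socket
    option transport.address-family inet
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
end-volume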


Cheers,

Anthony


From: sokar6...@hotmail.com
To: gluster-users@gluster.org
Subject: glusterfs 3.3 beta2 ,Address family not supported
Date: Thu, 5 Jan 2012 09:43:50 +







Good morning folks,

I installed glusterfs 3.3 beta2 to use it with hadoop but after the 
installation, glusterd daemon cannot start : 

+--+
[2012-01-05 09:38:15.143286] W [glusterfsd.c:750:cleanup_and_exit] 
(--/lib64/libc.so.6(__clone+0x6d) [0x2ab886c5e42d] (--/lib64/libpthread.so.0 
[0x2ab886a8a2a3] (--/usr/local/sbin/glusterd(glusterfs_sigwaiter+0x17c) 
[0x404a3c]))) 0-: received signum (15), shutting down
[2012-01-05 09:44:55.746275] I [glusterd.c:574:init] 0-management: Using 
/etc/glusterd as working directory
[2012-01-05 09:44:55.755201] E [socket.c:2190:socket_listen] 
0-socket.management: socket creation failed (Address family not supported by 
protocol)
[2012-01-05 09:44:55.755392] I [glusterd.c:89:glusterd_uuid_init] 0-glusterd: 
retrieved UUID: aa4a61f1-3d7d-44e1-8939-b298b3164c07
Given volfile:
+--+
  1: volume management
  2: type mgmt/glusterd
  3: option working-directory /etc/glusterd
  4: option transport-type socket
  5: option transport.socket.keepalive-time 10
  6: option transport.socket.keepalive-interval 2
  7: option transport.socket.read-fail-log off
  8: end-volume

and obviously gluster cmd don't work :
# gluster peer status
Connection failed. Please check if gluster daemon is operational.

Ipv6 is disabled, I'm on SuSe 10 SP3

Thx

Anthony

  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Detecting split brain

2011-12-05 Thread anthony garnier

Thank you for your answer.
One last question: on some directories I see a small difference when I 
display the extended attributes: 

getfattr -d -m trusted* -e hex *

Server1 # file: yvask300
trusted.gfid=0x433c796513864672871226072825336f
trusted.glusterfs.dht=0x00017fff

Server2 # file: yvask300
trusted.gfid=0x433c796513864672871226072825336f
trusted.glusterfs.dht=0x00017fff


Server3 # file: yvask300
trusted.afr.poolsave-client-2=0x0200
trusted.afr.poolsave-client-3=0x0200
trusted.gfid=0x433c796513864672871226072825336f
trusted.glusterfs.dht=0x00017ffe

Server4 # # file: yvask300
trusted.afr.poolsave-client-2=0x0200
trusted.afr.poolsave-client-3=0x0200
trusted.gfid=0x433c796513864672871226072825336f
trusted.glusterfs.dht=0x00017ffe

Why is the attribute trusted.afr.VOLNAME-client-X not displayed on servers 1 
and 2?


Thx for your help.

Date: Fri, 2 Dec 2011 08:59:48 +0530
From: prani...@gluster.com
To: sokar6...@hotmail.com
CC: jda...@redhat.com; gluster-users@gluster.org
Subject: Re: [Gluster-users] Detecting split brain


On 12/01/2011 11:33 PM, anthony garnier wrote:

  So if I understand well, a value different than 0s for the
  attribute trusted.afr.poolsave-client-0 indicate split brain.

  Thx

   Date: Thu, 1 Dec 2011 11:23:29 -0500
   From: jda...@redhat.com
   To: sokar6...@hotmail.com
   CC: gluster-users@gluster.org
   Subject: Re: [Gluster-users] Detecting split brain

   On Thu, 1 Dec 2011 16:00:09 +, anthony garnier sokar6...@hotmail.com wrote:

    Hi,

    I got a lot of files with attributes : 0sA

    Serv 1 :
    # file: tbo_rmr_globale_log_11-04-07_15h43m48s.log
    trusted.afr.poolsave-client-0=0s
    trusted.afr.poolsave-client-1=0s
    trusted.gfid=0sfm7CRROuQ4+wuQfmHjFCdg==

    Serv 2 :
    # file: tbo_rmr_globale_log_11-04-07_15h43m48s.log
    trusted.afr.poolsave-client-0=0s
    trusted.afr.poolsave-client-1=0s
    trusted.gfid=0sfm7CRROuQ4+wuQfmHjFCdg==

    Does it mean that those files need Self-Healing ? I use GlusterFS 3.2.3

   This is actually normal. For some reason that would probably make me
   throw up if I knew it, getfattr misreports 0x
   as 0s (which would be 0x404040404040404040404040) if
   you don't give it the -e hex flag. The value is actually three
   four-byte integers, and if they're zero it means there are no pending
   operations. Any *other* value is likely to indicate split brain.

  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

Small Correction:

A value different from 0sAAA.. means there are pending operations.

If you dump the values with the -e hex option, there should be 24 hex
digits: the first 8 represent pending operations on the data of the
file (write/truncate), the next 8 represent pending operations on
metadata (permissions, ownership, etc.), and the last 8 represent pending
operations on entries (creation/deletion/rename of a file inside that
directory). Now, if both data/metadata/entry digits are non-zero on a
file, then that will be a split-brain.

Example of a data split-brain:

 trusted.afr.poolsave-client-0=0s0001
 trusted.afr.poolsave-client-1=0s0020
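
To make this concrete, dumping the attributes on the bricks with -e hex looks
roughly like this (the file name is the one from the earlier mails; the value
shown is made up purely to illustrate the layout described above):

# run in the file's directory on each brick, not through the client mount
getfattr -d -m 'trusted*' -e hex tbo_rmr_globale_log_11-04-07_15h43m48s.log

# (spaces added for readability)
# trusted.afr.poolsave-client-0=0x 00000001 00000000 00000000
#                                    data   metadata   entry
# all zeros means nothing is pending; non-zero counters on both replicas,
# each blaming the other, is the split-brain case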



Thanks

Pranith
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Detecting split brain

2011-12-01 Thread anthony garnier

Hi,

I got a lot of files with attributes : 0sA

Serv 1 :
# file: tbo_rmr_globale_log_11-04-07_15h43m48s.log
trusted.afr.poolsave-client-0=0s
trusted.afr.poolsave-client-1=0s
trusted.gfid=0sfm7CRROuQ4+wuQfmHjFCdg==

Serv 2 :
# file: tbo_rmr_globale_log_11-04-07_15h43m48s.log
trusted.afr.poolsave-client-0=0s
trusted.afr.poolsave-client-1=0s
trusted.gfid=0sfm7CRROuQ4+wuQfmHjFCdg==

Does it mean that those files need Self-Healing ? I use GlusterFS 3.2.3


Setup : 
Volume Name: poolsave
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ylal3550:/users3/poolsave
Brick2: ylal3570:/users3/poolsave
Brick3: ylal3560:/users3/poolsave
Brick4: ylal3580:/users3/poolsave
Options Reconfigured:
diagnostics.brick-log-level: ERROR
performance.io-thread-count: 64
nfs.port: 2049
performance.cache-refresh-timeout: 2
performance.cache-max-file-size: 4GB
performance.cache-min-file-size: 1KB
network.ping-timeout: 10
performance.cache-size: 6GB

Any help would be appreciated .



  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Detecting split brain

2011-12-01 Thread anthony garnier

So if I understand well, a value different than 0s for the 
attribute  trusted.afr.poolsave-client-0  indicate split brain.

Thx

 Date: Thu, 1 Dec 2011 11:23:29 -0500
 From: jda...@redhat.com
 To: sokar6...@hotmail.com
 CC: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Detecting split brain
 
 On Thu, 1 Dec 2011 16:00:09 +
 anthony garnier sokar6...@hotmail.com wrote:
 
  
  Hi,
  
  I got a lot of files with attributes : 0sA
  
  Serv 1 :
  # file: tbo_rmr_globale_log_11-04-07_15h43m48s.log
  trusted.afr.poolsave-client-0=0s
  trusted.afr.poolsave-client-1=0s
  trusted.gfid=0sfm7CRROuQ4+wuQfmHjFCdg==
  
  Serv 2 :
  # file: tbo_rmr_globale_log_11-04-07_15h43m48s.log
  trusted.afr.poolsave-client-0=0s
  trusted.afr.poolsave-client-1=0s
  trusted.gfid=0sfm7CRROuQ4+wuQfmHjFCdg==
  
  Does it mean that those files need Self-Healing ? I use GlusterFS
  3.2.3
 
 This is actually normal.  For some reason that would probably make me
 throw up if I knew it, getfattr misreports 0x
 as 0s (which would be 0x404040404040404040404040) if
 you don't give it the -e hex flag.  The value is actually three
 four-byte integers, and if they're zero it means there are no pending
 operations.  Any *other* value is likely to indicate split brain.
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Roadmap

2011-11-30 Thread anthony garnier

Hi,

Thank you for the answer. 
Sorry to bother you, but I have another question: is it possible to add to the 
documentation a post which explains how to deal with split-brain situations? I 
think it could be useful (because I just coped with a split-brain scenario 
today ^^).

Regards,
Anthony Garnier


 From: jwal...@gluster.com
 To: sokar6...@hotmail.com; gluster-users@gluster.org
 Subject: RE: [Gluster-users] Roadmap
 Date: Tue, 29 Nov 2011 18:33:23 +
 
 Hi there,
 
 The roadmap is currently undergoing a review. Look for some version of it to 
 be available next week and certainly in time for the Gluster.org webinar on 
 12/14.
 
 Thanks!
 John Mark Walker
 Gluster community guy
 
 
 From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] 
 on behalf of anthony garnier [sokar6...@hotmail.com]
 Sent: Tuesday, November 29, 2011 7:45 AM
 To: gluster-users@gluster.org
 Subject: [Gluster-users] Roadmap
 
 Hi folks,
 
 Do you know where I can find a up to date Roadmap for GlusterFS ?
 
 Regards,
 
 Anthony Garnier
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] UCARP with NFS

2011-09-09 Thread anthony garnier

I found the problem this morning!
It's because TCP connections are not reset on the master server, and when the 
clients come back to the master, the two sides enter a TCP DUP ACK storm. You need to 
kill all gluster processes when the master comes back up.
More info : https://bugzilla.redhat.com/show_bug.cgi?id=369991#c31
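
In practice we do that when the master takes the VIP back; a rough sketch 
(process names and init script are assumptions about our install, adjust as needed):

# on the master, before it re-announces the VIP
pkill glusterfs
pkill glusterfsd
/etc/init.d/glusterd restart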

Anthony


 From: muel...@tropenklinik.de
 To: sokar6...@hotmail.com; whit.glus...@transpect.com
 CC: gluster-users@gluster.org
 Subject: AW: [Gluster-users] UCARP with NFS
 Date: Thu, 8 Sep 2011 16:13:24 +0200
 
 Cmd on slave: 
 /usr/sbin/ucarp -z -B -M -b 1 -i bond0:0
 
 Did you try -b 7 in your command? That solved things for me in
 another configuration.
 
 
 
 
 EDV Daniel Müller
 
 Leitung EDV
 Tropenklinik Paul-Lechler-Krankenhaus
 Paul-Lechler-Str. 24
 72076 Tübingen 
 Tel.: 07071/206-463, Fax: 07071/206-499
 eMail: muel...@tropenklinik.de
 Internet: www.tropenklinik.de 
 
 Von: gluster-users-boun...@gluster.org
 [mailto:gluster-users-boun...@gluster.org] Im Auftrag von anthony garnier
 Gesendet: Donnerstag, 8. September 2011 15:55
 An: whit.glus...@transpect.com
 Cc: gluster-users@gluster.org
 Betreff: Re: [Gluster-users] UCARP with NFS
 
 Whit,
 
 Here is my conf file : 
 #
 # Location of the ucarp pid file
 UCARP_PIDFILE=/var/run/ucarp0.pid
 
 # Define if this host is the preferred MASTER (this adds or removes the -P
 # option)
 UCARP_MASTER=yes
 
 #
 # ucarp base, interval monitoring time 
 # lower number will be preferred master
 # set to same to have master stay alive as long as possible
 UCARP_BASE=1
 
 # Priority [0-255]
 # lower number will be preferred master
 ADVSKEW=0
 
 
 #
 # Interface for Ipaddress
 INTERFACE=bond0:0
 
 #
 # Instance id
 # any number from 1 to 255
 # Master and Backup need to be the same
 INSTANCE_ID=42
 
 #
 # Password so servers can trust who they are talking to
 PASSWORD=glusterfs
 
 #
 # The Application Address that will failover
 VIRTUAL_ADDRESS=10.68.217.3
 VIRTUAL_BROADCAST=10.68.217.255
 VIRTUAL_NETMASK=255.255.255.0
 #
 
 #Script for configuring interface
 UPSCRIPT=/etc/ucarp/script/vip-up.sh
 DOWNSCRIPT=/etc/ucarp/script/vip-down.sh
 
 # The Maintenance Address of the local machine
 SOURCE_ADDRESS=10.68.217.85
 
 
 Cmd on master : 
 /usr/sbin/ucarp -z -B -P -b 1 -i bond0:0 -v 42 -p glusterfs -k 0 -a
 10.68.217.3 -s 10.68.217.85 --upscript=/etc/ucarp/script/vip-up.sh
 --downscript=/etc/ucarp/script/vip-down.sh
 
 Cmd on slave : 
 /usr/sbin/ucarp -z -B -M -b 1 -i bond0:0 -v 42 -p glusterfs -k 50 -a
 10.68.217.3 -s 10.68.217.86 --upscript=/etc/ucarp/script/vip-up.sh
 --downscript=/etc/ucarp/script/vip-down.sh
 
 
 To me, having a preferred master is necessary because I'm using RR DNS and I
 want to do a kind of active/active failover. I'll explain the whole idea: 
 
 SERVER 1--- SERVER 2
 VIP1 VIP2
 
 When I access the URL glusterfs.preprod.inetpsa.com, RR DNS gives me one of
 the VIPs (load balancing). The main problem is that if I use only RR DNS and a
 server goes down, the clients currently bound to this server will fail too. So
 to avoid that I need VIP failover. 
 This way, if a server goes down, all the clients on this server will be
 bound to the other one. Because I want load balancing, I need a preferred
 master, so by default VIP 1 must stay on server 1 and VIP 2 on server 2.
 Currently I'm trying to make it work with one VIP only.
 
 
 Anthony
 
  Date: Thu, 8 Sep 2011 09:32:59 -0400
  From: whit.glus...@transpect.com
  To: sokar6...@hotmail.com
  CC: gluster-users@gluster.org
  Subject: Re: [Gluster-users] UCARP with NFS
  
  On Thu, Sep 08, 2011 at 01:02:41PM +, anthony garnier wrote:
  
   I got a client mounted on the VIP; when the master falls, the client
   switches automatically to the slave with almost no delay, it works like a
   charm. But when the master comes back up, the mount point on the client
   freezes.
   I've done some monitoring with tcpdump: when the master comes up, the
   client sends packets to the master but the master does not seem to
   establish the TCP connection.
  
  Anthony,
  
  Your UCARP command line choices and scripts would be worth looking at
 here.
  There are different UCARP behavior options for when the master comes back
  up. If the initial failover works fine, it may be that you'll have better
  results if you don't have a preferred master. That is, you can either have
  UCARP set so that the slave relinquishes the IP back to the master when
 the
  master comes back up, or you can have UCARP set so that the slave becomes
  the new master, until such time as the new master goes down, in which case
  the former master becomes master again.
  
  If you're doing it the first way, there may be a brief overlap, where both
  systems claim the VIP. That may be where your mount is failing. By doing
 it
  the second way, where the VIP is held by whichever system has it until
 that
  system actually goes down, there's no overlap. There shouldn't be a
 reason,
  in the Gluster context, to care which system is master, is there?

[Gluster-users] UCARP with NFS

2011-09-08 Thread anthony garnier

Hi all,

I'm currently trying to deploy ucarp with GlusterFS, especially for NFS access.
Ucarp works well when I ping the VIP and shut down the master (and then when the 
master comes back up), but I face a problem with NFS connections.

I got a client mounted on the VIP: when the master falls, the client switches 
automatically to the slave with almost no delay, it works like a charm. But when 
the master comes back up, the mount point on the client freezes.
I've done some monitoring with tcpdump: when the master comes up, the client sends 
packets to the master but the master does not seem to establish the TCP connection.


My  volume config : 

Volume Name: hermes
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal2950:/users/export
Brick2: ylal2960:/users/export
Options Reconfigured:
performance.cache-size: 1GB
performance.cache-refresh-timeout: 60
network.ping-timeout: 25
nfs.port: 2049

As Craig wrote previously, I probed the hosts and created the volume with 
their real IPs; I only use the VIP on the client side.
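
For reference, the clients mount through the VIP only, along these lines (the 
mount point is just an example; nfs.port is left at 2049 as above, so no port 
option is needed):

mount -t nfs -o vers=3,proto=tcp 10.68.217.3:/hermes /mnt/hermes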

Does anyone have experience with UCARP and GlusterFS?

Anthony 
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] UCARP with NFS

2011-09-08 Thread anthony garnier

I already have those options enabled. 
But thx anyway

 From: muel...@tropenklinik.de
 To: sokar6...@hotmail.com; gluster-users@gluster.org
 Subject: AW: [Gluster-users] UCARP with NFS
 Date: Thu, 8 Sep 2011 15:25:18 +0200
 
 Yes,
 working for me on centos 5.6 samba/glusterfs/ext3.
 Seems to me ucarp is not configured to work master/slave:
 Master:
 
 ID=1
 BIND_INTERFACE=eth1
 #Real IP
 SOURCE_ADDRESS=xxx.xxx.xxx.xxx
 #slaveconfig,OPTIONS=--shutdown --preempt  -b 1 -k 50
 OPTIONS=--shutdown --preempt  -b 1
 #Virtual IP used by ucarp
 VIP_ADDRESS=zzz.zzz.zzz.zzz
 #Ucarp Password
 PASSWORD=
 
 On your slave you need: OPTIONS=--shutdown --preempt -b 1 -k 50
 
 Good luck
 Daniel
 
 
 
 EDV Daniel Müller
 
 Leitung EDV
 Tropenklinik Paul-Lechler-Krankenhaus
 Paul-Lechler-Str. 24
 72076 Tübingen 
 Tel.: 07071/206-463, Fax: 07071/206-499
 eMail: muel...@tropenklinik.de
 Internet: www.tropenklinik.de 
 
 Von: gluster-users-boun...@gluster.org
 [mailto:gluster-users-boun...@gluster.org] Im Auftrag von anthony garnier
 Gesendet: Donnerstag, 8. September 2011 15:03
 An: gluster-users@gluster.org
 Betreff: [Gluster-users] UCARP with NFS
 
 Hi all,
 
 I'm currently trying to deploy ucarp with GlusterFS, especially for NFS
 access.
 Ucarp works well when I ping the VIP and shut down the master (and then when
 the master comes back up), but I face a problem with NFS connections.
 
 I got a client mounted on the VIP: when the master falls, the client switches
 automatically to the slave with almost no delay, it works like a charm. But
 when the master comes back up, the mount point on the client freezes.
 I've done some monitoring with tcpdump: when the master comes up, the client
 sends packets to the master but the master does not seem to establish the TCP
 connection.
 
 
 My  volume config : 
 
 Volume Name: hermes
 Type: Replicate
 Status: Started
 Number of Bricks: 2
 Transport-type: tcp
 Bricks:
 Brick1: ylal2950:/users/export
 Brick2: ylal2960:/users/export
 Options Reconfigured:
 performance.cache-size: 1GB
 performance.cache-refresh-timeout: 60
 network.ping-timeout: 25
 nfs.port: 2049
 
 As Craig wrote previously, I probed the hosts and created the volume with
 their real IPs; I only use the VIP on the client side.
 
 Does anyone have experience with UCARP and GlusterFS?
 
 Anthony 
 
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] UCARP with NFS

2011-09-08 Thread anthony garnier

Whit,

Here is my conf file : 
#
# Location of the ucarp pid file
UCARP_PIDFILE=/var/run/ucarp0.pid

# Define if this host is the preferred MASTER (this adds or removes the -P 
option)
UCARP_MASTER=yes

#
# ucarp base, Interval monitoring time 
#lower number will be preferred master
# set to same to have master stay alive as long as possible
UCARP_BASE=1

#Priority [0-255]
#lower number will be preferred master
ADVSKEW=0


#
# Interface for Ipaddress
INTERFACE=bond0:0

#
# Instance id
# any number from 1 to 255
# Master and Backup need to be the same
INSTANCE_ID=42

#
# Password so servers can trust who they are talking to
PASSWORD=glusterfs

#
# The Application Address that will failover
VIRTUAL_ADDRESS=10.68.217.3
VIRTUAL_BROADCAST=10.68.217.255
VIRTUAL_NETMASK=255.255.255.0
#

#Script for configuring interface
UPSCRIPT=/etc/ucarp/script/vip-up.sh
DOWNSCRIPT=/etc/ucarp/script/vip-down.sh

# The Maintenance Address of the local machine
SOURCE_ADDRESS=10.68.217.85


Cmd on master : 
/usr/sbin/ucarp -z -B -P -b 1 -i bond0:0 -v 42 -p glusterfs -k 0 -a 10.68.217.3 
-s 10.68.217.85 --upscript=/etc/ucarp/script/vip-up.sh 
--downscript=/etc/ucarp/script/vip-down.sh

Cmd on slave : 
/usr/sbin/ucarp -z -B -M -b 1 -i bond0:0 -v 42 -p glusterfs -k 50 -a 
10.68.217.3 -s 10.68.217.86 --upscript=/etc/ucarp/script/vip-up.sh 
--downscript=/etc/ucarp/script/vip-down.sh


To me, having a preferred master is necessary because I'm using RR DNS and I 
want to do a kind of active/active failover. I'll explain the whole idea: 

SERVER 1--- SERVER 2
VIP1 VIP2

When I access the URL glusterfs.preprod.inetpsa.com, RR DNS gives me one of the 
VIPs (load balancing). The main problem is that if I use only RR DNS and a server 
goes down, the clients currently bound to this server will fail too. So to avoid 
that I need VIP failover. 
This way, if a server goes down, all the clients on this server will be 
bound to the other one. Because I want load balancing, I need a preferred master, 
so by default VIP 1 must stay on server 1 and VIP 2 on server 2.
Currently I'm trying to make it work with one VIP only.


Anthony


 Date: Thu, 8 Sep 2011 09:32:59 -0400
 From: whit.glus...@transpect.com
 To: sokar6...@hotmail.com
 CC: gluster-users@gluster.org
 Subject: Re: [Gluster-users] UCARP with NFS
 
 On Thu, Sep 08, 2011 at 01:02:41PM +, anthony garnier wrote:
 
  I got a client mounted on the VIP; when the master falls, the client switches
  automatically to the slave with almost no delay, it works like a charm. But
  when the master comes back up, the mount point on the client freezes.
  I've done some monitoring with tcpdump: when the master comes up, the client
  sends packets to the master but the master does not seem to establish the
  TCP connection.
 
 Anthony,
 
 Your UCARP command line choices and scripts would be worth looking at here.
 There are different UCARP behavior options for when the master comes back
 up. If the initial failover works fine, it may be that you'll have better
 results if you don't have a preferred master. That is, you can either have
 UCARP set so that the slave relinquishes the IP back to the master when the
 master comes back up, or you can have UCARP set so that the slave becomes
 the new master, until such time as the new master goes down, in which case
 the former master becomes master again.
 
 If you're doing it the first way, there may be a brief overlap, where both
 systems claim the VIP. That may be where your mount is failing. By doing it
 the second way, where the VIP is held by whichever system has it until that
 system actually goes down, there's no overlap. There shouldn't be a reason,
 in the Gluster context, to care which system is master, is there?
 
 Whit
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Quota calculation

2011-09-06 Thread anthony garnier

Hi Junaid,

Sorry I was not able to reach you on IRC, I only saw your email this morning 
because of the time difference.
I'll try to delete the volume and the backend directories and start from scratch. I'll keep 
you posted.

Anthony
From: jun...@gluster.com
Date: Tue, 6 Sep 2011 12:38:48 +0530
Subject: Quota calculation
To: sokar6...@hotmail.com
CC: gluster-users@gluster.org

Hi Anthony,
Previously, you had a volume which was pure replicate and it had the servers
Volume Name: venus



Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal3020:/soft/venus
Brick2: ylal3030:/soft/venus



and then you created another distributed-replicate volume which reused the bricks 
of the previous volume as part of the new volume:




Volume Name: venus



Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ylal3020:/soft/venus
Brick2: ylal3030:/soft/venus
Brick3: yval1000:/soft/venus
Brick4: yval1010:/soft/venus




So, what I suspect is that the extended attributes were carried forward into 
the new volume. So, when a file is created in the new volume, quota 
sees the old extended attribute value + the new file size. If this is true in 
your case, then please use fresh backend directories (newly created directories) 
and kindly report whether you see the same behavior.
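
For example, before reusing a backend directory you can check whether it still 
carries the old accounting and, with the volume stopped, clear it (just a 
sketch; freshly created directories are the safer route):

# on each brick
getfattr -d -m 'trusted.glusterfs.quota' -e hex /soft/venus
# if old values show up, either recreate the directory or remove the
# attributes, e.g.:
setfattr -x trusted.glusterfs.quota.size /soft/venus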



Junaid
On Mon, Sep 5, 2011 at 11:45 PM, Mohammed Junaid jun...@gluster.com wrote:



Hi Anthony,
Are you reachable on the #gluster IRC channel? It will be easier to debug this issue on 
IRC. I will be available on IRC after 11am IST. I will reply to your 
mail, but it will speed up the debugging process if you are on IRC.




Junaid

On Mon, Sep 5, 2011 at 8:03 PM, anthony garnier sokar6...@hotmail.com wrote:












Hi Junaid,

Sorry about the confusion, indeed I gave you the
wrong output. So let's start from the beginning: I disabled quota and
reactivated it.

My configuration : 

Volume Name: venus
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ylal3020:/soft/venus
Brick2: ylal3030:/soft/venus




Brick3: yval1000:/soft/venus
Brick4: yval1010:/soft/venus
Options Reconfigured:
nfs.port: 2049
performance.cache-refresh-timeout: 60
performance.cache-size: 1GB
network.ping-timeout: 10
features.quota: on




features.limit-usage: /test:100MB,/psa:200MB,/:7GB,/soft:5GB
features.quota-timeout: 120

Size of each folder from the mount point : 
/test  : 4.1MB




/psa  : 160MB
/soft  : 1.2GB
Total size 1.4GB
(If you want the complete output of  du, don't hesitate)


gluster volume quota venus list
path  limit_set size
--




/psa  200MB  167.0MB   =OK
/soft   5GB4.7GB= NO OK
/test 100MB4.0MB  = OK




/   7GB5.2GB = NO OK

It seems this is 4 times the original size for / and /soft.


Getfattr for the four nodes : 

ylal3020:/etc/ucarp/interface # getfattr -n trusted.glusterfs.quota.size -e hex 
/soft/venus ; getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/test 
; getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa;getfattr -n 
trusted.glusterfs.quota.size -e hex /soft/venus/soft




getfattr: Removing leading '/' from absolute path names
# file: soft/venus
trusted.glusterfs.quota.size=0xb3ad6c00

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/test




trusted.glusterfs.quota.size=0x001f8000

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/psa
trusted.glusterfs.quota.size=0x04f62c00

getfattr: Removing leading '/' from absolute path names




# file: soft/venus/soft
trusted.glusterfs.quota.size=0x98f4b200


ylal3030:/etc/ucarp/interface # getfattr -n trusted.glusterfs.quota.size -e hex 
/soft/venus ; getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/test 
; getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa;getfattr -n 
trusted.glusterfs.quota.size -e hex /soft/venus/soft




getfattr: Removing leading '/' from absolute path names
# file: soft/venus
trusted.glusterfs.quota.size=0xb38e1c00

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/test




trusted.glusterfs.quota.size=0x001f8000

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/psa
trusted.glusterfs.quota.size=0x04f62c00

getfattr: Removing leading '/' from absolute path names




# file: soft/venus/soft
trusted.glusterfs.quota.size=0x98b5e200


yval1000:/soft/venus # getfattr -n trusted.glusterfs.quota.size -e hex 
/soft/venus ; getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/test 
; getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa;getfattr

[Gluster-users] How to know clients currently connected ?

2011-09-06 Thread anthony garnier

Hi folks,

Maybe it's a strange question, but I'd like to know if there is a way to see the 
clients currently connected to the filesystem (with NFS)?
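
(A crude workaround is to list the established TCP connections to the NFS port 
-- 2049 in my setup -- on each server, for example:

netstat -tn | grep ':2049'

but that only shows TCP sockets, not NFS clients as such, hence the question.)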

Another question: apart from ucarp or heartbeat, is there another way to do HA with 
NFS? (I mean, when a client is connected to a server and the server crashes, 
the client gets bound to another one without remounting the share.)


Anthony
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] How to know clients currently connected ?

2011-09-06 Thread anthony garnier

Hi Whit,

My understanding of autofs is that it will mount the filesystem as you need it, 
but will it remount the FS when there is a server crash? 


Anthony

 Date: Tue, 6 Sep 2011 08:34:11 -0400
 From: whit.glus...@transpect.com
 To: sokar6...@hotmail.com
 CC: gluster-users@gluster.org
 Subject: Re: [Gluster-users] How to know clients currently connected ?
 
 Hi Anthony,
 
 If you need the client to remain mounted, you need _some_ way of doing IP
 takeover. You could write your own script for that.
 
 As for remounting though, if your client is a *nix (Linux, OSX, whatever)
 you can use autofs to establish the mount, and that will also handle
 remounting automatically. Not sure if there's an autofs-type option for
 Windows.
 
 Whit
 
 
 On Tue, Sep 06, 2011 at 12:21:07PM +, anthony garnier wrote:
 Hi folks,
  
 Maybe it's a strange question but I 'd like to know if there is a way to
 see clients currently connected to the filesystem ( with NFS) ?
  
 Other question, except ucarp or heartbeat, is there an other way to do HA
 in NFS ? ( I mean when client is connected to a server and then the 
  server
 crash, the client will be binded to an other one without remounting the
 share.
  
 Anthony
 
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] How to know clients currently connected ?

2011-09-06 Thread anthony garnier

Whit,
Thank you for those answers. I'll investigate UCARP and autofs and give you 
feedback

Anthony


 Date: Tue, 6 Sep 2011 09:00:31 -0400
 From: whit.glus...@transpect.com
 To: sokar6...@hotmail.com
 CC: whit.glus...@transpect.com; gluster-users@gluster.org
 Subject: Re: [Gluster-users] How to know clients currently connected ?
 
 My understanding of autofs is that it will mount the filesystem as you need
 it, but will it remount the FS when there is a server crash?
 
 Anthony,
 
 I haven't specifically tested it in this precise context, but in general
 that's exactly what it does. We went to using autofs because we had a server
 that was prone to crash. With autofs, as soon as that server came back up,
 it got remounted. Very reliable for that.
 
 When a server crashes the mount goes away. When you ask for a filesystem
 with autofs, if it's not there, it tries to mount it. I don't see how it can
 even know if you've switched which system has the IP and filesystem it's
 trying to mount. I know for sure it works well in a setup with
 DRBD/Heartbeat/NFS, so it ought to work as well for Gluster/UCARP/NFS. From
 autofs's perspective, it's just NFS.
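 
 A minimal autofs direct-map sketch for that case could look like this (the
 hostname and volume name are just borrowed from the UCARP thread, and
 Gluster's NFS server is v3 over TCP, hence the options):
 
  # /etc/auto.master
  /-   /etc/auto.gluster
 
  # /etc/auto.gluster
  /users/glusterfs_mnt  -fstype=nfs,vers=3,proto=tcp  glusterfs.preprod.inetpsa.com:/hermes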
 
 Whit
 
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] nfs port in 3.1.6

2011-09-05 Thread anthony garnier

Hi Kaushik,

Thank you very much.


Anthony

From: kaushi...@gluster.com
Date: Mon, 5 Sep 2011 11:16:56 +0530
Subject: Re: [Gluster-users] nfs port in 3.1.6
To: sokar6...@hotmail.com
CC: gluster-users@gluster.org

Hi Anthony,
This is a known bug and is tracked as BUG-3414. The fix has been 
merged into the 3.1 branch code base and will be available in the next 
release.


Regards,
Kaushik BV


  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Quota calculation

2011-09-05 Thread anthony garnier




Hi Junaid,

Sorry about the confusion, indeed I gave you the
wrong output. So let's start from the beginning: I disabled quota and
reactivated it.

My configuration : 

Volume Name: venus
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ylal3020:/soft/venus
Brick2: ylal3030:/soft/venus
Brick3: yval1000:/soft/venus
Brick4: yval1010:/soft/venus
Options Reconfigured:
nfs.port: 2049
performance.cache-refresh-timeout: 60
performance.cache-size: 1GB
network.ping-timeout: 10
features.quota: on
features.limit-usage: /test:100MB,/psa:200MB,/:7GB,/soft:5GB
features.quota-timeout: 120

Size of each folder from the mount point : 
/test  : 4.1MB
/psa  : 160MB
/soft  : 1.2GB
Total size 1.4GB
(If you want the complete output of  du, don't hesitate)


gluster volume quota venus list
path  limit_set size
--
/psa  200MB  167.0MB   =OK
/soft   5GB4.7GB= NO OK
/test 100MB4.0MB  = OK
/   7GB5.2GB = NO OK

It seems this is 4 times the original size for / and /soft.


Getfattr for the four nodes : 

ylal3020:/etc/ucarp/interface # getfattr -n trusted.glusterfs.quota.size -e hex 
/soft/venus ; getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/test 
; getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa;getfattr -n 
trusted.glusterfs.quota.size -e hex /soft/venus/soft
getfattr: Removing leading '/' from absolute path names
# file: soft/venus
trusted.glusterfs.quota.size=0xb3ad6c00

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/test
trusted.glusterfs.quota.size=0x001f8000

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/psa
trusted.glusterfs.quota.size=0x04f62c00

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/soft
trusted.glusterfs.quota.size=0x98f4b200


ylal3030:/etc/ucarp/interface # getfattr -n trusted.glusterfs.quota.size -e hex 
/soft/venus ; getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/test 
; getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa;getfattr -n 
trusted.glusterfs.quota.size -e hex /soft/venus/soft
getfattr: Removing leading '/' from absolute path names
# file: soft/venus
trusted.glusterfs.quota.size=0xb38e1c00

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/test
trusted.glusterfs.quota.size=0x001f8000

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/psa
trusted.glusterfs.quota.size=0x04f62c00

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/soft
trusted.glusterfs.quota.size=0x98b5e200


yval1000:/soft/venus # getfattr -n trusted.glusterfs.quota.size -e hex 
/soft/venus ; getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/test 
; getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa;getfattr -n 
trusted.glusterfs.quota.size -e hex /soft/venus/soft
getfattr: Removing leading '/' from absolute path names
# file: soft/venus
trusted.glusterfs.quota.size=0x98a8ac00

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/test
trusted.glusterfs.quota.size=0x0021

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/psa
trusted.glusterfs.quota.size=0x0579e000

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/soft
trusted.glusterfs.quota.size=0x930dcc00


yval1010:/soft/venus # getfattr -n trusted.glusterfs.quota.size -e hex 
/soft/venus ; getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/test 
; getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa;getfattr -n 
trusted.glusterfs.quota.size -e hex /soft/venus/soft
getfattr: Removing leading '/' from absolute path names
# file: soft/venus
trusted.glusterfs.quota.size=0x98a2ac00

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/test
trusted.glusterfs.quota.size=0x0021

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/psa
trusted.glusterfs.quota.size=0x0579e000

getfattr: Removing leading '/' from absolute path names
# file: soft/venus/soft
trusted.glusterfs.quota.size=0x9307cc00
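
For reference, those hex xattr values convert to sizes with plain shell 
arithmetic, e.g. for the first node above:

printf '%d\n' 0xb3ad6c00                      # 3014487040 bytes
echo "$(( 0xb3ad6c00 / 1024 / 1024 )) MiB"    # ~2874 MiB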

















  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] quota calculation gluster 3.2.3

2011-09-02 Thread anthony garnier

Hi Junaid,

Here is the output of the different commands executed on both servers.
The commands show identical output on both servers.

# getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus
getfattr: Removing leading '/' from absolute path names
# file: soft/venus
trusted.glusterfs.quota.size=0x22ccac00

 # getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/test
getfattr: Removing leading '/' from absolute path names
# file: soft/venus/test
trusted.glusterfs.quota.size=0x

 # getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa 
getfattr: Removing leading '/' from absolute path names
# file: soft/venus/psa
trusted.glusterfs.quota.size=0x0013d000

 # getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/soft
getfattr: Removing leading '/' from absolute path names
# file: soft/venus/soft
trusted.glusterfs.quota.size=0x0d15ce00


# gluster volume quota venus list
path  limit_set size
--
/   2GB  556.8MB
/test 100MB   0Bytes
/psa  200MB1.2MB
/soft1500MB  209.4MB


Indeed, about 90% of it is small files (~6 files).

Thx

Anthony


From: jun...@gluster.com
Date: Fri, 2 Sep 2011 06:18:30 +0530
Subject: Re: [Gluster-users] quota calculation gluster 3.2.3
To: sokar6...@hotmail.com
CC: gluster-users@gluster.org

Hi Anthony,
To debug this further, can you send the output of 
getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus
getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/test
getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa
getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/soft


from both machines. Also check the output of
gluster volume quota <volname> list


Sometimes it takes a little while for the sizes to come up to date. Also, what 
kind of data were you creating? (I mean a large number of small files or large 
files, because in the case of small files the directory sizes are not accounted 
for by quota when calculating the size, unlike the du -h command, which counts 
the directory size as well.)



Junaid


On Thu, Sep 1, 2011 at 1:37 PM, anthony garnier sokar6...@hotmail.com wrote:







Hi all,

I've enabled quota but I'm a bit confused by the values displayed by GlusterFS.

Here is my volume : 

Volume Name: venus
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp


Bricks:
Brick1: ylal3020:/soft/venus
Brick2: ylal3030:/soft/venus
Options Reconfigured:
features.limit-usage: /test:100MB,/psa:200MB,/soft:1500MB,/:2GB
features.quota: on
diagnostics.client-log-level: ERROR


diagnostics.brick-log-level: ERROR
network.ping-timeout: 10
performance.cache-size: 2GB
nfs.port: 2049

I've got 3 folders in the backend (/soft/venus) : 
psa160MB  (with du -sh)
soft 1.2GB


test 12KB
Total   1.4GB

But when I list the quota with gluster I got : 
# gluster volume quota venus list
path  limit_set size
--


/test 100MB   12.0KB  = This 
one is OK
/psa  200MB   64.4MB   = not OK
/soft1500MB  281.8MB   = not OK


/   2GB  346.2MB   = not OK

Any idea ?

Regards,

Anthony Garnier



  

___

Gluster-users mailing list

Gluster-users@gluster.org

http://gluster.org/cgi-bin/mailman/listinfo/gluster-users



  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] quota calculation gluster 3.2.3

2011-09-02 Thread anthony garnier

I forgot to check on the 2 other servers.
So ylal3020 and ylal3030 show the output sent previously.

But yval1000 and yval1010 show, for all the commands:
 trusted.glusterfs.quota.size=0x

Is that normal?

Regards,

Anthony
From: sokar6...@hotmail.com
To: jun...@gluster.com
CC: gluster-users@gluster.org
Subject: RE: [Gluster-users] quota calculation gluster 3.2.3
Date: Fri, 2 Sep 2011 07:51:52 +








Hi Junaid,

Here is the output of the different commands executed on both servers.
The commands show identical output on both servers.

# getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus
getfattr: Removing leading '/' from absolute path names
# file: soft/venus
trusted.glusterfs.quota.size=0x22ccac00

 # getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/test
getfattr: Removing leading '/' from absolute path names
# file: soft/venus/test
trusted.glusterfs.quota.size=0x

 # getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa 
getfattr: Removing leading '/' from absolute path names
# file: soft/venus/psa
trusted.glusterfs.quota.size=0x0013d000

 # getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/soft
getfattr: Removing leading '/' from absolute path names
# file: soft/venus/soft
trusted.glusterfs.quota.size=0x0d15ce00


# gluster volume quota venus list
path  limit_set size
--
/   2GB  556.8MB
/test 100MB   0Bytes
/psa  200MB1.2MB
/soft1500MB  209.4MB


Indeed, about 90% of it is small files (~6 files).

Thx

Anthony


From: jun...@gluster.com
Date: Fri, 2 Sep 2011 06:18:30 +0530
Subject: Re: [Gluster-users] quota calculation gluster 3.2.3
To: sokar6...@hotmail.com
CC: gluster-users@gluster.org

Hi Anthony,
To debug this further, can you send the output of 
getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus
getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/test
getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa
getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/soft


from both machines. Also check the output of
gluster volume quota <volname> list


Sometimes it takes a little while for the sizes to come up to date. Also, what 
kind of data were you creating? (I mean a large number of small files or large 
files, because in the case of small files the directory sizes are not accounted 
for by quota when calculating the size, unlike the du -h command, which counts 
the directory size as well.)



Junaid


On Thu, Sep 1, 2011 at 1:37 PM, anthony garnier sokar6...@hotmail.com wrote:







Hi all,

I've enabled quota but I'm a bit confused by the values displayed by GlusterFS.

Here is my volume : 

Volume Name: venus
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp


Bricks:
Brick1: ylal3020:/soft/venus
Brick2: ylal3030:/soft/venus
Options Reconfigured:
features.limit-usage: /test:100MB,/psa:200MB,/soft:1500MB,/:2GB
features.quota: on
diagnostics.client-log-level: ERROR


diagnostics.brick-log-level: ERROR
network.ping-timeout: 10
performance.cache-size: 2GB
nfs.port: 2049

I've got 3 folders in the backend (/soft/venus) : 
psa160MB  (with du -sh)
soft 1.2GB


test 12KB
Total   1.4GB

But when I list the quota with gluster I got : 
# gluster volume quota venus list
path  limit_set size
--


/test 100MB   12.0KB  = This 
one is OK
/psa  200MB   64.4MB   = not OK
/soft1500MB  281.8MB   = not OK


/   2GB  346.2MB   = not OK

Any idea ?

Regards,

Anthony Garnier



  

___

Gluster-users mailing list

Gluster-users@gluster.org

http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] quota calculation gluster 3.2.3

2011-09-02 Thread anthony garnier









I've done an ls -lRa through the mount point and now I get: 
/soft/venus # gluster volume quota venus list
path  limit_set size
--
/   2GB1.3GB
/test 100MB8.0KB
/psa  200MB  151.0MB
/soft1500MB  826.3MB

=Seems to be consistent

and now the getfattr command on yval1000 and yval1010 shows: 

yval1000 : 
getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/   
trusted.glusterfs.quota.size=0x28f9ca00

getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/soft
trusted.glusterfs.quota.size=0x1421d400

getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa
trusted.glusterfs.quota.size=0x053d6000

 
yval1010 : 
getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/   
trusted.glusterfs.quota.size=0x2904ea00

getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/soft
trusted.glusterfs.quota.size=0x2292ba00

getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa
trusted.glusterfs.quota.size=0x053d6000

Regards,

Anthony

From: sokar6...@hotmail.com
To: jun...@gluster.com
CC: gluster-users@gluster.org
Subject: RE: [Gluster-users] quota calculation gluster 3.2.3
Date: Fri, 2 Sep 2011 07:57:19 +








I forgot to check on the 2 other servers.
So ylal3020 and ylal3030 show the output sent previously.

But yval1000 and yval1010 show, for all the commands:
 trusted.glusterfs.quota.size=0x

Is that normal?

Regards,

Anthony
From: sokar6...@hotmail.com
To: jun...@gluster.com
CC: gluster-users@gluster.org
Subject: RE: [Gluster-users] quota calculation gluster 3.2.3
Date: Fri, 2 Sep 2011 07:51:52 +








Hi Junaid,

Here is the output of the different commands executed on both servers.
The commands show identical output on both servers.

# getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus
getfattr: Removing leading '/' from absolute path names
# file: soft/venus
trusted.glusterfs.quota.size=0x22ccac00

 # getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/test
getfattr: Removing leading '/' from absolute path names
# file: soft/venus/test
trusted.glusterfs.quota.size=0x

 # getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa 
getfattr: Removing leading '/' from absolute path names
# file: soft/venus/psa
trusted.glusterfs.quota.size=0x0013d000

 # getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/soft
getfattr: Removing leading '/' from absolute path names
# file: soft/venus/soft
trusted.glusterfs.quota.size=0x0d15ce00


# gluster volume quota venus list
path  limit_set size
--
/   2GB  556.8MB
/test 100MB   0Bytes
/psa  200MB1.2MB
/soft1500MB  209.4MB


Indeed, about 90% of it is small files (~6 files).

Thx

Anthony


From: jun...@gluster.com
Date: Fri, 2 Sep 2011 06:18:30 +0530
Subject: Re: [Gluster-users] quota calculation gluster 3.2.3
To: sokar6...@hotmail.com
CC: gluster-users@gluster.org

Hi Anthony,
To debug this further, can you send the output of 
getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus
getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/test
getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/psa
getfattr -n trusted.glusterfs.quota.size -e hex /soft/venus/soft


from both machines. Also check the output of
gluster volume quota <volname> list


Sometimes it takes a little while for the sizes to come up to date. Also, what 
kind of data were you creating? (I mean a large number of small files or large 
files, because in the case of small files the directory sizes are not accounted 
for by quota when calculating the size, unlike the du -h command, which counts 
the directory size as well.)



Junaid


On Thu, Sep 1, 2011 at 1:37 PM, anthony garnier sokar6...@hotmail.com wrote:







Hi all,

I've enabled quota but I'm a bit confused by the values displayed by GlusterFS.

Here is my volume : 

Volume Name: venus
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp


Bricks:
Brick1: ylal3020:/soft/venus
Brick2: ylal3030:/soft/venus
Options Reconfigured:
features.limit-usage: /test:100MB,/psa:200MB,/soft:1500MB,/:2GB
features.quota: on
diagnostics.client-log-level: ERROR


diagnostics.brick-log-level: ERROR
network.ping-timeout: 10
performance.cache-size: 2GB
nfs.port: 2049

I've got 3 folders in the backend (/soft/venus) : 
psa160MB  (with du -sh)
soft 1.2GB


test 12KB
Total   1.4GB

But when I list the quota with gluster I got : 
# gluster volume quota venus list
path  limit_set

[Gluster-users] quota calculation gluster 3.2.3

2011-09-01 Thread anthony garnier

Hi all,

I've enabled quota but I'm a bit confused by the values displayed by GlusterFS.

Here is my volume : 

Volume Name: venus
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal3020:/soft/venus
Brick2: ylal3030:/soft/venus
Options Reconfigured:
features.limit-usage: /test:100MB,/psa:200MB,/soft:1500MB,/:2GB
features.quota: on
diagnostics.client-log-level: ERROR
diagnostics.brick-log-level: ERROR
network.ping-timeout: 10
performance.cache-size: 2GB
nfs.port: 2049

I've got 3 folders in the backend (/soft/venus) : 
psa160MB  (with du -sh)
soft 1.2GB
test 12KB
Total   1.4GB

But when I list the quota with gluster I got : 
# gluster volume quota venus list
path  limit_set size
--
/test 100MB   12.0KB  = This 
one is OK
/psa  200MB   64.4MB   = not OK
/soft1500MB  281.8MB   = not OK
/   2GB  346.2MB   = not OK

Any idea ?

Regards,

Anthony Garnier



  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] NFS problem

2011-06-09 Thread anthony garnier

Hi,

I got the same problem as Juergen,
My volume is a simple replicated volume with 2 host and GlusterFS 3.2.0

Volume Name: poolsave
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: ylal2950:/soft/gluster-data
Brick2: ylal2960:/soft/gluster-data
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
network.ping-timeout: 20
performance.cache-size: 512MB
nfs.port: 2049

I'm running this command : 

I get these errors: 
tar: ./uvs00: owner not changed
tar: could not stat ./uvs00/log/0906uvsGESEC.log
tar: ./uvs00: group not changed
tar: could not stat ./uvs00/log/0306uvsGESEC.log
tar: ./uvs00/log: Input/output error
cannot change back?: Unknown error 526
tar: ./uvs00/log: owner not changed
tar: ./uvs00/log: group not changed
tar: tape blocksize error

And then I tried to ls in the gluster mount: 
/bin/ls: .: Input/output error

The only way to recover is to restart the volume.


Here is the logfile in debug mode: 


Given volfile:
+--+
  1: volume poolsave-client-0
  2: type protocol/client
  3: option remote-host ylal2950
  4: option remote-subvolume /soft/gluster-data
  5: option transport-type tcp
  6: option ping-timeout 20
  7: end-volume
  8: 
  9: volume poolsave-client-1
 10: type protocol/client
 11: option remote-host ylal2960
 12: option remote-subvolume /soft/gluster-data
 13: option transport-type tcp
 14: option ping-timeout 20
 15: end-volume
 16: 
 17: volume poolsave-replicate-0
 18: type cluster/replicate
 19: subvolumes poolsave-client-0 poolsave-client-1
 20: end-volume
 21: 
 22: volume poolsave-write-behind
 23: type performance/write-behind
 24: subvolumes poolsave-replicate-0
 25: end-volume
 26: 
 27: volume poolsave-read-ahead
 28: type performance/read-ahead
 29: subvolumes poolsave-write-behind
 30: end-volume
 31: 
 32: volume poolsave-io-cache
 33: type performance/io-cache
 34: option cache-size 512MB
 35: subvolumes poolsave-read-ahead
 36: end-volume
 37: 
 38: volume poolsave-quick-read
 39: type performance/quick-read
 40: option cache-size 512MB
 41: subvolumes poolsave-io-cache
 42: end-volume
 43: 
 44: volume poolsave-stat-prefetch
 45: type performance/stat-prefetch
 46: subvolumes poolsave-quick-read
 47: end-volume
 48: 
 49: volume poolsave
 50: type debug/io-stats
 51: option latency-measurement off
 52: option count-fop-hits off
 53: subvolumes poolsave-stat-prefetch
 54: end-volume
 55: 
 56: volume nfs-server
 57: type nfs/server
 58: option nfs.dynamic-volumes on
 59: option rpc-auth.addr.poolsave.allow *
 60: option nfs3.poolsave.volume-id 71e0dabf-4620-4b6d-b138-3266096b93b6
 61: option nfs.port 2049
 62: subvolumes poolsave
 63: end-volume

+--+
[2011-06-09 16:52:23.709018] I [rpc-clnt.c:1531:rpc_clnt_reconfig] 
0-poolsave-client-0: changing port to 24014 (from 0)
[2011-06-09 16:52:23.709211] I [rpc-clnt.c:1531:rpc_clnt_reconfig] 
0-poolsave-client-1: changing port to 24011 (from 0)
[2011-06-09 16:52:27.716417] I 
[client-handshake.c:1080:select_server_supported_programs] 0-poolsave-client-0: 
Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
[2011-06-09 16:52:27.716650] I [client-handshake.c:913:client_setvolume_cbk] 
0-poolsave-client-0: Connected to 10.68.217.85:24014, attached to remote volume 
'/soft/gluster-data'.
[2011-06-09 16:52:27.716679] I [afr-common.c:2514:afr_notify] 
0-poolsave-replicate-0: Subvolume 'poolsave-client-0' came back up; going 
online.
[2011-06-09 16:52:27.717020] I [afr-common.c:836:afr_fresh_lookup_cbk] 
0-poolsave-replicate-0: added root inode
[2011-06-09 16:52:27.729719] I 
[client-handshake.c:1080:select_server_supported_programs] 0-poolsave-client-1: 
Using Program GlusterFS-3.1.0, Num (1298437), Version (310)
[2011-06-09 16:52:27.730014] I [client-handshake.c:913:client_setvolume_cbk] 
0-poolsave-client-1: Connected to 10.68.217.86:24011, attached to remote volume 
'/soft/gluster-data'.
[2011-06-09 17:01:35.537084] W 
[stat-prefetch.c:178:sp_check_and_create_inode_ctx] 
(--/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_mkdir+0x1cc) 
[0x2b3b88fc] 
(--/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_mkdir+0x151)
 [0x2b2948e1] 
(--/usr/local/lib/glusterfs/3.2.0/xlator/performance/stat-prefetch.so(sp_mkdir+0xd2)
 [0x2b1856c2]))) 0-poolsave-stat-prefetch: stat-prefetch context is present 
in inode (ino:0 gfid:----) when it is supposed 
to be not present
[2011-06-09 17:01:35.546601] W 
[stat-prefetch.c:178:sp_check_and_create_inode_ctx] 
(--/usr/local/lib/glusterfs/3.2.0/xlator/nfs/server.so(nfs_fop_create+0x1db) 
[0x2b3b95bb] 
(--/usr/local/lib/glusterfs/3.2.0/xlator/debug/io-stats.so(io_stats_create+0x165)
 [0x2b294ad5] 

Re: [Gluster-users] Gluster-users Digest, Vol 37, Issue 49

2011-05-18 Thread anthony garnier

Hi Paul,

I've been testing GlusterFS since 2.0.9 in our environment (distributed 
replicated setup with a lot of small files) and we have seen good things but 
also bad ones. To me, the last stable version after 2.0.9 was 3.1.3 (I've tested 
each version, even the QA releases). The Gluster team has added good features and 
generated a lot of buzz on the web, but what we need now is strong stability 
and reliability. 
I'm quite disappointed with the new version (geo-rep doesn't work at all for me), 
small-file performance hasn't improved since 3.0.x, and CIFS is still not native. 
Also the change in their support model is now quite a nightmare for us (they only 
support CentOS and the virtual appliance). 
We have to go into production at the end of June and I have to admit I'm a bit 
scared.

So in a nutshell: stability and reliability. We don't need a new version every 
month.

Regards,
Anthony


 Message: 2
 Date: Wed, 18 May 2011 15:05:27 +0100
 From: paul simpson p...@realisestudio.com
 Subject: Re: [Gluster-users] gluster 3.2.0 - totally broken?
 To: Whit Blauvelt whit.glus...@transpect.com
 Cc: gluster-users@gluster.org
 Message-ID: BANLkTikaaDfaHOAW4kMa8_y0=n_+zbh...@mail.gmail.com
 Content-Type: text/plain; charset=iso-8859-1
 
 hi guys,
 
 we're using 3.1.3 and i'm not moving off it.  i totally agree with stephans
 comments: the gluster devs *need* to concentrate on stability before adding
 any new features.  it seems gluster dev is sales driven - not tech focused.
  we need less new buzz words - and more solid foundations.
 
 gluster is a great idea - but is in danger of falling short and failing if
 the current trajectory is now altered.  greater posix compatibility
 (permissions, NLM locking) should be a perquisite for an NFS server. hell,
 the documentation is terrible; it's hard for us users to contribute to the
 community when we are groping around in the dark too.
 
 question : is anyone using 3.2 in a real world production situation?
 
 regards to all,
 
 -paul

  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Géo-rep fail

2011-05-17 Thread anthony garnier

Hi,
I've put the client log in debug mode: 
# gluster volume geo-replication /soft/venus config log-level DEBUG
geo-replication config updated successfully

 # gluster volume geo-replication /soft/venus config log-file
/usr/local/var/log/glusterfs/geo-replication-slaves/${session_owner}:file%3A%2F%2F%2Fsoft%2Fvenus.log

 # gluster volume geo-replication athena /soft/venus config session-owner
28cbd261-3a3e-4a5a-b300-ea468483c944

 # gluster volume geo-replication athena /soft/venus start
Starting geo-replication session between athena  /soft/venus has been 
successful

 # gluster volume geo-replication athena /soft/venus status
MASTER   SLAVE  STATUS  
  

athena   /soft/venus
starting...

and then : 

# gluster volume geo-replication athena /soft/venus status
MASTER   SLAVE  STATUS  
  

athena   /soft/venus    faulty  
  



For client : 
cat 
/usr/local/var/log/glusterfs/geo-replication-slaves/28cbd261-3a3e-4a5a-b300-ea468483c944:file%3A%2F%2F%2Fsoft%2Fvenus.log
 

[2011-05-17 09:20:40.519731] I [gsyncd(slave):287:main_i] top: syncing: 
file:///soft/venus
[2011-05-17 09:20:40.520587] I [resource(slave):200:service_loop] FILE: slave 
listening
[2011-05-17 09:20:40.532951] I [repce(slave):61:service_loop] RepceServer: 
terminating on reaching EOF.
[2011-05-17 09:21:50.528803] I [gsyncd(slave):287:main_i] top: syncing: 
file:///soft/venus
[2011-05-17 09:21:50.529666] I [resource(slave):200:service_loop] FILE: slave 
listening
[2011-05-17 09:21:50.542349] I [repce(slave):61:service_loop] RepceServer: 
terminating on reaching EOF.



For server : 
 # cat 
/usr/local/var/log/glusterfs/geo-replication/athena/file%3A%2F%2F%2Fsoft%2Fvenus.log

[2011-05-17 09:30:04.431369] I [monitor(monitor):42:monitor] Monitor: 

[2011-05-17 09:30:04.431669] I [monitor(monitor):43:monitor] Monitor: starting 
gsyncd worker
[2011-05-17 09:30:04.486852] I [gsyncd:287:main_i] top: syncing: 
gluster://localhost:athena - file:///soft/venus
[2011-05-17 09:30:04.488148] D [repce:131:push] RepceClient: call 
30011:47491847633776:1305617404.49 __repce_version__() ...
[2011-05-17 09:30:04.635481] D [repce:141:__call__] RepceClient: call 
30011:47491847633776:1305617404.49 __repce_version__ - 1.0
[2011-05-17 09:30:04.635751] D [repce:131:push] RepceClient: call 
30011:47491847633776:1305617404.64 version() ...
[2011-05-17 09:30:04.636342] D [repce:141:__call__] RepceClient: call 
30011:47491847633776:1305617404.64 version - 1.0
[2011-05-17 09:30:04.645972] E [syncdutils:131:log_raise_exception] top: 
FAIL: 
Traceback (most recent call last):
  File /usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py, line 102, in 
main
main_i()
  File /usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py, line 294, in 
main_i
local.connect()
  File /usr/local/libexec/glusterfs/python/syncdaemon/resource.py, line 379, 
in connect
raise RuntimeError(command failed:  +  .join(argv))
RuntimeError: command failed: /usr/local/sbin/glusterfs --xlator-option 
*-dht.assert-no-child-down=true -l 
/usr/local/var/log/glusterfs/geo-replication/athena/file%3A%2F%2F%2Fsoft%2Fvenus.gluster.log
 -s localhost --volfile-id athena --client-pid=-1 /tmp/gsyncd-aux-mount-TEqjwY
[2011-05-17 09:30:04.647973] D [monitor(monitor):57:monitor] Monitor: worker 
got connected in 0 sec, waiting 59 more to make sure it's fine


Thx for your help.

Anthony




 Date: Mon, 16 May 2011 20:56:44 +0530
 From: cs...@gluster.com
 CC: sokar6...@hotmail.com
 Subject: Re: Géo-rep fail
 
 On 05/16/11 17:06, anthony garnier wrote:
  Hi,
  I'm currently trying to use géo-rep on the local data-node into a
  directory but it fails with status faulty
 [...]
  I've done this cmd :
  # gluster volume geo-replication athena /soft/venus config
 
  # gluster volume geo-replication athena /soft/venus start
 
  # gluster volume geo-replication athena /soft/venus status
  MASTER SLAVE STATUS
  
  athena /soft/venus faulty
 
 
  Here is the log file in Debug mod :
 
  [2011-05-16 13:28:55.268006] I [monitor(monitor):42:monitor] Monitor:
  
  [2011-05-16 13:28:55.268281] I [monitor(monitor):43:monitor] Monitor:
  starting gsyncd worker
 [...]
  [2011-05-16 13:28:59.547034] I [master:191:crawl] GMaster: primary
  master with volume id 28521f8f-49d3-4e2a-b984-f664f44f5289 ...
  [2011-05-16 13:28:59.547180] D [master:199:crawl] GMaster: entering .
  [2011-05-16 13:28:59.548289] D [repce:131:push] RepceClient: call
  10888

Re: [Gluster-users] Géo-rep fail

2011-05-16 Thread anthony garnier

Hi,

Yes, my machine has passwordless ssh login but it still doesn't work. I also meet 
the requirements, with the right software versions.


 Date: Mon, 16 May 2011 07:02:57 -0500
 From: lakshmipa...@gluster.com
 To: sokar6...@hotmail.com
 CC: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Géo-rep fail
 
 Hi -
 Do you have  passwordless ssh login to slave machine? After setting 
 passwordless login ,please try this  -
 #gluster volume geo-replication athena root@$(hostname):/soft/venus start
 or 
 #gluster volume geo-replication athena $(hostname):/soft/venus start
 
 wait for few seconds then verify the status.
 
 For minimum requirement ,checkout this 
 http://www.gluster.com/community/documentation/index.php/Gluster_3.2:_Checking_Geo-replication_Minimum_Requirements
 
 HTH
 -- 
 
 Cheers,
 Lakshmipathi.G
 FOSS Programmer.
 
 
 - Original Message -
 From: anthony garnier sokar6...@hotmail.com
 To: gluster-users@gluster.org
 Sent: Monday, May 16, 2011 5:06:22 PM
 Subject: [Gluster-users] Géo-rep fail
 
 
 Hi, 
 I'm currently trying to use géo-rep on the local data-node into a directory 
 but it fails with status faulty 
 
 Volume : 
 Volume Name: athena 
 Type: Distributed-Replicate 
 Status: Started 
 Number of Bricks: 2 x 2 = 4 
 Transport-type: tcp 
 Bricks: 
 Brick1: ylal3020:/users/exp1 
 Brick2: yval1010:/users/exp3 
 Brick3: ylal3030:/users/exp2 
 Brick4: yval1000:/users/exp4 
 Options Reconfigured: 
 geo-replication.indexing: on 
 diagnostics.count-fop-hits: on 
 diagnostics.latency-measurement: on 
 performance.cache-max-file-size: 256MB 
 network.ping-timeout: 5 
 performance.cache-size: 512MB 
 performance.cache-refresh-timeout: 60 
 nfs.port: 2049 
 
 I've done this cmd : 
 # gluster volume geo-replication athena /soft/venus config 
 
 # gluster volume geo-replication athena /soft/venus start 
 
 # gluster volume geo-replication athena /soft/venus status 
 MASTER SLAVE STATUS 
 
  
 athena /soft/venus faulty 
 
 
 Here is the log file in Debug mod : 
 
 [2011-05-16 13:28:55.268006] I [monitor(monitor):42:monitor] Monitor: 
  
 [2011-05-16 13:28:55.268281] I [monitor(monitor):43:monitor] Monitor: 
 starting gsyncd worker 
 [2011-05-16 13:28:55.326309] I [gsyncd:287:main_i] top: syncing: 
 gluster://localhost:athena - file:///soft/venus 
 [2011-05-16 13:28:55.327905] D [repce:131:push] RepceClient: call 
 10888:47702589471600:1305545335.33 __repce_version__() ... 
 [2011-05-16 13:28:55.462613] D [repce:141:__call__] RepceClient: call 
 10888:47702589471600:1305545335.33 __repce_version__ - 1.0 
 [2011-05-16 13:28:55.462886] D [repce:131:push] RepceClient: call 
 10888:47702589471600:1305545335.46 version() ... 
 [2011-05-16 13:28:55.463330] D [repce:141:__call__] RepceClient: call 
 10888:47702589471600:1305545335.46 version - 1.0 
 [2011-05-16 13:28:55.480202] D [resource:381:connect] GLUSTER: auxiliary 
 glusterfs mount in place 
 [2011-05-16 13:28:55.682863] D [resource:393:connect] GLUSTER: auxiliary 
 glusterfs mount prepared 
 [2011-05-16 13:28:55.684926] D [monitor(monitor):57:monitor] Monitor: worker 
 got connected in 0 sec, waiting 59 more to make sure it's fine 
 [2011-05-16 13:28:55.685096] D [repce:131:push] RepceClient: call 
 10888:1115703616:1305545335.68 keep_alive(None,) ... 
 [2011-05-16 13:28:55.685859] D [repce:141:__call__] RepceClient: call 
 10888:1115703616:1305545335.68 keep_alive - 1 
 [2011-05-16 13:28:59.546574] D [master:167:volinfo_state_machine] top: 
 (None, None)  (None, 28521f8f) - (None, 28521f8f) 
 [2011-05-16 13:28:59.546863] I [master:184:crawl] GMaster: new master is 
 28521f8f-49d3-4e2a-b984-f664f44f5289 
 [2011-05-16 13:28:59.547034] I [master:191:crawl] GMaster: primary master 
 with volume id 28521f8f-49d3-4e2a-b984-f664f44f5289 ... 
 [2011-05-16 13:28:59.547180] D [master:199:crawl] GMaster: entering . 
 [2011-05-16 13:28:59.548289] D [repce:131:push] RepceClient: call 
 10888:47702589471600:1305545339.55 xtime('.', 
 '28521f8f-49d3-4e2a-b984-f664f44f5289') ... 
 [2011-05-16 13:28:59.596978] E [syncdutils:131:log_raise_exception] top: 
 FAIL: 
 Traceback (most recent call last): 
 File /usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py, line 
 152, in twrap 
 tf(*aa) 
 File /usr/local/libexec/glusterfs/python/syncdaemon/repce.py, line 118, in 
 listen 
 rid, exc, res = recv(self.inf) 
 File /usr/local/libexec/glusterfs/python/syncdaemon/repce.py, line 42, in 
 recv 
 return pickle.load(inf) 
 EOFError 
 
 
 Does anyone already got those errors ? 
 
 
 
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
 
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi

Re: [Gluster-users] How to use gluster for WAN/Data Center replication

2011-03-11 Thread anthony garnier

Hi,
The GSLB+RR is especially useful for NFS clients, in fact; for gluster clients 
it's just used to fetch the volfile.
The process to remove a node entry from the DNS is indeed manual; we are looking for 
a way to do it automatically, maybe with a script.
What do you mean by "How do you ensure that a copy of a file in one site 
definitely is saved on the other site as well"?
Servers from replica pools 1 and 2 are mixed between the 2 datacenters:
Replica pool 1 : Brick 1,2,3,4
Replica pool 2 : Brick 5,6,7,8

Datacenter 1 : Brick 1,2,5,6
Datacenter 2 : Brick 3,4,7,8 

In this way, each datacenter holds 2 replicas of each file, and each datacenter can 
stay independent if there is a WAN interruption.
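
In practice this is just a matter of the brick order given at creation time: 
with replica 4, the first four bricks form replica pool 1 and the next four 
form replica pool 2, so something like

gluster volume create venus replica 4 transport tcp \
    serv1:/users/exp1 serv2:/users/exp2 serv3:/users/exp3 serv4:/users/exp4 \
    serv5:/users/exp5 serv6:/users/exp6 serv7:/users/exp7 serv8:/users/exp8

gives each pool two bricks in each datacenter.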

Regards,
Anthony

 Date: Thu, 10 Mar 2011 12:00:53 -0800
 Subject: Re: How to use gluster for WAN/Data Center replication
 From: mohitanch...@gmail.com
 To: sokar6...@hotmail.com; gluster-users@gluster.org
 
 Thanks for the info! I am assuming it's a manual process to remove
 nodes from the DNS?
 
 If I am not wrong I think load balancing by default occurs for native
 gfs client that you are using. Initial mount is required only to read
 volfile.
 
 How do you ensure that a copy of file in one site definitely is saved
 on other site as well?
 
 On Thu, Mar 10, 2011 at 1:11 AM, anthony garnier sokar6...@hotmail.com 
 wrote:
  Hi,
  I have done a setup(see my setup below) on multi site datacenter with
  gluster and currently it doesn't work properly but there is some workaround.
  The main problem is that replication is synchronous and there is currently
  no way to turn it in async mod. I've done some test
  (iozone,tar,bonnie++,script...) and performance
  is poor with small files especially. We are using an url to access servers :
  glusterfs.cluster.inetcompany.com
  This url is in DNS GSLB(geo DNS)+RR (Round Robin)
  It means that client from datacenter 1 will always be binded randomly on
  storage node from his Datacenter.
  They use this command for mounting the filesystem :
  mount -t glusterfs glusterfs.cluster.inetcompany.com:/venus
  /users/glusterfs_mnt
 
  If one node fails , it is remove from de list of the DNS, client do a new
  DNS query and he is binded on active node of his Datacenter.
  You could use Wan accelerator also.
 
  We currently are in intra site mode and we are waiting for Async replication
  feature expected in version 3.2. It should come soon.
 
 
  Volume Name: venus
  Type: Distributed-Replicate
  Status: Started
  Number of Bricks: 2 x 4 = 8
  Transport-type: tcp
  Bricks:
  Brick1: serv1:/users/exp1  \
  Brick2: serv2:/users/exp2Réplica pool 1 \
  Brick3: serv3:/users/exp3  /  \
  Brick4: serv4:/users/exp4   =EnvoyerDistribution
  Brick5: serv5:/users/exp5  \  /
  Brick6: serv6:/users/exp6Réplica pool 2 /
  Brick7: serv7:/users/exp7  /
  Brick8: serv8:/users/exp8
 
  Datacenter 1 : Brick 1,2,5,6
  Datacenter 2 : Brick 3,4,7,8
  Distance between Datacenters : 500km
  Latency between Datacenters : 11ms
  Datarate between Datacenters : ~100Mb/s
 
 
 
  Regards,
  Anthony
 
 
 
 Message: 3
 Date: Wed, 9 Mar 2011 16:44:27 -0800
 From: Mohit Anchlia mohitanch...@gmail.com
 Subject: [Gluster-users] How to use gluster for WAN/Data Center
 replication
 To: gluster-users@gluster.org
 Message-ID:
 AANLkTi=dkK=zx0qdcfnkelj5nkf1df3+g1hxdzfzn...@mail.gmail.com
 Content-Type: text/plain; charset=ISO-8859-1
 
 How to setup gluster for WAN/Data Center replication? Are there others
 using it this way?
 
 Also, how to make the writes asynchronuous for data center replication?
 
 We have a requirement to replicate data to other data center as well.
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] How to use gluster for WAN/Data Center replication

2011-03-10 Thread anthony garnier

Hi,
I have done a setup (see below) on a multi-site datacenter with gluster, 
and currently it doesn't work properly, but there are some workarounds.
The main problem is that replication is synchronous and there is currently no 
way to turn it into async mode. I've done some tests 
(iozone, tar, bonnie++, scripts...) and performance
is poor, with small files especially. We are using a URL to access the servers: 
glusterfs.cluster.inetcompany.com
This URL is in DNS GSLB (geo DNS) + RR (round robin).
It means that a client from datacenter 1 will always be bound randomly to a 
storage node from its datacenter.
They use this command for mounting the filesystem : 
mount -t glusterfs glusterfs.cluster.inetcompany.com:/venus /users/glusterfs_mnt

If one node fails, it is removed from the DNS list; the client does a new DNS
query and is bound to an active node of its datacenter.
You could also use a WAN accelerator.
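
Since the DNS name only needs to resolve at mount time (the native client
fetches the volfile and then talks to the bricks directly), the mount itself
can also be given a fallback volfile server where the client version supports
it. A sketch only; the backupvolfile-server mount option depends on the
glusterfs version, and the fstab line and the serv3 fallback are illustrative:

mount -t glusterfs -o backupvolfile-server=serv3 \
    glusterfs.cluster.inetcompany.com:/venus /users/glusterfs_mnt

# or persistently, via /etc/fstab:
glusterfs.cluster.inetcompany.com:/venus  /users/glusterfs_mnt  glusterfs  defaults,_netdev,backupvolfile-server=serv3  0 0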

We are currently in intra-site mode and are waiting for the async replication
feature expected in version 3.2. It should come soon.


Volume Name: venus
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 4 = 8
Transport-type: tcp
Bricks:
Brick1: serv1:/users/exp1  \
Brick2: serv2:/users/exp2   Replica pool 1 \
Brick3: serv3:/users/exp3  /                \
Brick4: serv4:/users/exp4                    ==> Distribution
Brick5: serv5:/users/exp5  \                /
Brick6: serv6:/users/exp6   Replica pool 2 /
Brick7: serv7:/users/exp7  /
Brick8: serv8:/users/exp8

Datacenter 1 : Brick 1,2,5,6
Datacenter 2 : Brick 3,4,7,8
Distance between Datacenters : 500km
Latency between Datacenters : 11ms
Datarate between Datacenters : ~100Mb/s



Regards,
Anthony



Message: 3
Date: Wed, 9 Mar 2011 16:44:27 -0800
From: Mohit Anchlia mohitanch...@gmail.com
Subject: [Gluster-users] How to use gluster for WAN/Data Center
   replication
To: gluster-users@gluster.org
Message-ID:
   AANLkTi=dkK=zx0qdcfnkelj5nkf1df3+g1hxdzfzn...@mail.gmail.com
Content-Type: text/plain; charset=ISO-8859-1
 
How to setup gluster for WAN/Data Center replication? Are there others
using it this way?
 
Also, how to make the writes asynchronous for data center replication?
 
We have a requirement to replicate data to other data center as well.  
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Roadmap

2011-03-08 Thread anthony garnier

 Ben,
For us, the most important features to have will be:

0 - Continue to improve the gNFS server (and make it work on AIX and Solaris)

1 - Continuous Data Replication (over WAN)

2 - CIFS/Active Directory Support (native support)

And then : 

-Object storage  (unified file and object)

-Geo-replication to Amazon S3 (unify public and private cloud)

-Continuous Data Protection

-Improved User Interface

-REST management API's 

-Enhanced support for iSCSI SANs  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Testing 3.1.3qa4

2011-03-07 Thread anthony garnier

Hi,
I'm currently testing the latest QA version, and it seems that the gNFS server
doesn't work properly. I can run commands like cat, touch, rm... but when I try
to tar the whole file tree, the tar command hangs and then it's impossible to
umount the filesystem (resource busy). Can someone confirm this issue?

I've also noticed a new option: volume gsync start|stop|configure MASTER
SLAVE [options] - Geo-sync operations
Can someone tell me what this option is?
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster-users Digest, Vol 32, Issue 45

2010-12-17 Thread anthony garnier

George,

There is no problem using the -o bind option to mount a subdirectory; I'm
currently using this and it works well.

This is what I'm doing:

mount -t glusterfs  ylal3020:/athena /users/glusterfs_mnt
mount -o bind /users/glusterfs_mnt/test /otherlocation

or you can mount it at the same location:
mount -o bind /users/glusterfs_mnt/test /users/glusterfs_mnt

in one line:

mount -t glusterfs ylal3020:/athena /users/glusterfs_mnt && \
mount -o bind /users/glusterfs_mnt/test /users/glusterfs_mnt
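
If these mounts need to survive a reboot, the same pair can also be written in
/etc/fstab; the entries below are only illustrative (the _netdev option and the
ordering of bind mounts relative to network filesystems depend on the
distribution):

ylal3020:/athena           /users/glusterfs_mnt  glusterfs  defaults,_netdev  0 0
/users/glusterfs_mnt/test  /otherlocation        none       bind              0 0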

Regards,
Garnier Anthony


 Message: 4
 Date: Thu, 16 Dec 2010 14:27:47 -0500
 From: George L. Emigh george.em...@dialecticnet.com
 Subject: Re: [Gluster-users] Problem mounting Gluster 3.1 with NFS
 To: gluster-users@gluster.org
 Message-ID: 201012161427.48025.george.em...@dialecticnet.com
 Content-Type: Text/Plain;  charset=iso-8859-1
 
 Since I was interested in mounting a subdirectory of the volume as well I 
 thought I would ask if there would be any problem using mount -o bind to 
 mount 
 the volume subdirectories in the desired locations after mounting the volume 
 in a generic location?
 
 On Thursday December 16 2010, Christian Fischer wrote:
On Friday 10 December 2010 16:58:03 Jacob Shucart wrote:
 Hello,
 
 Gluster 3.1.1 does not support mounting a subdirectory of the volume.
 This is going to be changed in the next release.  For now, you could
 mount 192.168.1.88:/raid, but not /raid/nfstest.

What is the 'next release' from your point of view?
3.1.2qa2 does not support mounting a subdirectory.

Christian

 -Jacob
 
 -Original Message-
 From: gluster-users-boun...@gluster.org
 [mailto:gluster-users-boun...@gluster.org] On Behalf Of Joe Landman
 Sent: Friday, December 10, 2010 7:45 AM
 To: gluster-users@gluster.org
 Subject: Re: [Gluster-users] Problem mounting Gluster 3.1 with NFS
 
 On 12/10/2010 10:42 AM, Thomas Riske wrote:
  Hello,
  
  I tried to NFS-mount a gluster-volume using the normal NFS-way with
  the directory-path:
  
  mount -t nfs 192.168.1.88:/raid/nfstest /mnt/testmount
  
  This gives me only the following error message:
  
  mount.nfs: mounting 192.168.1.88:/raid/nfstest failed, reason given
  by server: No such file or directory
 
 [...]
 
  Is this a bug in gluster, or am I missing something here?
  
  Mounting the Gluster-volume with the volume-name over NFS works...
  (mount -t nfs 192.168.1.88:/test-nfs /mnt/testmount)
 
  If you created the volume with a name of test-nfs, then that's what
  should show up in your exports:
 
 showmount -e 192.168.1.88

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
 -- 
 George L. Emigh
 
 
 --
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
 
 End of Gluster-users Digest, Vol 32, Issue 45
 *
  ___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Error with Server Side setup

2010-03-23 Thread anthony garnier

I'm doing a server-side configuration and I get this error when launching
glusterfsd:

yval1000:/etc/glusterfs # /etc/init.d/glusterfsd start
Starting glusterfsd:startproc:  exit status of parent of
/usr/local/sbin/glusterfsd: 1   failed


Here is the volfile of my first server. I have 4 servers which have almost the
same volfile.



#/etc/glusterfsd.vol of yval1000 server

volume posix #  local storage on server yval1000
  type storage/posix
  option directory /users/gluster-data
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick # local brick of yval1000 server
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume brick2 # other storage server
 type protocol/client
 option transport-type tcp/client
 option transport.socket.nodelay on
 option remote-host 10.82.216.45  # IP address of the remote server yval1010
 option remote-subvolume brick# name of the remote volume
end-volume

volume brick3
 type protocol/client
 option transport-type tcp/client
 option transport.socket.nodelay on
 option remote-host 10.82.216.46  # IP address of the remote server yval1020
 option remote-subvolume brick  # name of the remote volume
end-volume

volume brick4
 type protocol/client
 option transport-type tcp/client
 option transport.socket.nodelay on
 option remote-host 10.82.216.47 #IP address of the remote server yval1030
 option remote-subvolume brick  # name of the remote volume
end-volume

volume rep1
 type cluster/replicate
 subvolumes brick brick2
end-volume

volume rep2
 type cluster/replicate
 subvolumes brick3 brick4
end-volume

volume distribute
 type cluster/distribute
 subvolumes rep1 rep2
end-volume

volume writebehind
  type performance/write-behind
  option aggregate-size 1MB
  option flush-behind on
  subvolumes distribute
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option transport.socket.nodelay on
  subvolumes cache
end-volume
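
When startproc only reports the exit status, the actual reason usually lands in
the glusterfsd log; running the daemon once in the foreground with debug logging
tends to surface it directly. A minimal sketch, using the standard glusterfs 3.x
command-line flags and the volfile path from the comment above (adjust both to
this installation):

/usr/local/sbin/glusterfsd -f /etc/glusterfsd.vol \
    -l /var/log/glusterfs/glusterfsd.log -L DEBUG -N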


  
_
Consultez vos emails Orange, Gmail, Yahoo!, Free ... directement depuis HOTMAIL 
!
http://www.windowslive.fr/hotmail/agregation/___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Wan setup

2010-02-05 Thread anthony garnier

Thx for the answer.
Is there a way to put replication in an async mode, or to optimize the
configuration for a WAN network?
Anyway, I will try GlusterFS over WAN and I will let you know the result.

 Subject: Re: [Gluster-users] Wan setup
 From: g...@unitechoffshore.com
 To: sokar6...@hotmail.com
 CC: gluster-users@gluster.org
 Date: Thu, 4 Feb 2010 20:10:20 +0100
 
 Hi,
 
 I have done some testing on WAN with gluster. Sorry to say that it is
 synchronous transfer and not async. This means that if you open/write a
 file on your server it takes a long time to open/write it, because it waits
 for the slowest server to finish... so if your Internet connection on
 both places is 10 Mbit, your write to the server will also only be
 10 Mbit.
 
 
 I went for a different solution, UNISON. It is much like good old rsync,
 but it can replicate both ways in one go.
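
For reference, a two-way sync between the two sites with unison could look like
the following; the host name and paths are made up for this example, -batch
makes the run non-interactive, and such a job would typically be driven from
cron:

unison /users/glusterfs_mnt ssh://storage-dc2//users/glusterfs_mnt \
    -batch -times -logfile /var/log/unison.log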
 
 
 
 -- 
 Geir Ove Øksnes g...@unitechoffshore.com
 
 On Thu, 2010-02-04 at 16:02 +, anthony garnier wrote:
  Hi,
  I'm trying to set up a WAN gluster cluster with 8 servers (4 on each site)
  with replication and distribution. Is there someone who has WAN experience
  with gluster?

  _
  Discutez en direct avec vos amis sur Messenger !
  http://www.windowslive.fr/messenger
  ___ Gluster-users mailing list 
  Gluster-users@gluster.org 
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
  
_
Tchattez en direct en en vidéo avec vos amis !  
http://www.windowslive.fr/messenger/___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Async replication for Wan usage

2010-02-05 Thread anthony garnier

Hi,
I would like to set up a cluster in WAN mode. Is it possible to put the
replication translator in async mode? Or is there a way to optimize glusterfs
for WAN usage?



Example of my architecture : 


  
_
Téléchargez Internet Explorer 8 et surfez sans laisser de trace !
http://clk.atdmt.com/FRM/go/182932252/direct/01/___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Wan setup

2010-02-04 Thread anthony garnier

Hi,
I'm trying to set up a WAN gluster cluster with 8 servers (4 on each site) with
replication and distribution. Is there someone who has WAN experience with
gluster?
  
_
Discutez en direct avec vos amis sur Messenger !
http://www.windowslive.fr/messenger___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Questions about Gluster Storage Platform

2009-12-23 Thread anthony garnier

Hi,
I have some questions about the Gluster Storage Platform:
- Is it just a platform to manage the storage bricks and clients running
GlusterFS 3.0?
- Do you install the platform on a single node, or do you install it on each
server you want to manage?

Thx

 
  
_
Windows 7 à 35€ pour les étudiants !
http://www.windows-7-pour-les-etudiants.com___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] performance issue

2009-12-22 Thread anthony garnier

Yes of course, after each change I reload glusterfsd on the server side and I
remount the partition on the client side.

 Date: Tue, 22 Dec 2009 09:18:38 +0900
 From: beren...@riken.jp
 To: sokar6...@hotmail.com
 Subject: Re: [Gluster-users] performance issue
 
 anthony garnier wrote:
  I just tried to add your option but it changed nothing :(
 
 Did you remount glusterfs partitions after this change and
 before retrying?
 On server and nodes?
 
 If this is the problem, we can send to the mailing list after.
 
 Regards,
 F.
 
  
_
Windows 7 à 35€ pour les étudiants !
http://www.windows-7-pour-les-etudiants.com___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] performance issue

2009-12-21 Thread anthony garnier

Hi,
I just installed the latest version of glusterfs 3.0 and I am getting really
bad performance.
Here it is:
 # dd if=/dev/zero of=/users/glusterfs_mnt/sample bs=1k count=100000

This creates a file of 100 MB, and I got these results:
NFS = 75 MB/s
Gluster = 5.8 MB/s

I tried changing the block size, changing the write-behind parameters, adding
the read-ahead translator, removing all performance translators (and it's
worse :\ ), and trying the afr translator... but no change!

My configuration is: replicated and distributed (RAID 10 over the network) on
the server side over 4 bricks, with Gigabit Ethernet on all servers and clients.

What kind of performance do you get? Is this normal?
I also ran an iostat test and it consumed 24 hours of CPU time (3 days in
total).
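
A side note on the measurement itself: with bs=1k the client issues very small
writes, so the figure is dominated by per-request latency, and without a sync
the page cache can also inflate the numbers. Something along these lines (same
mount point, sizes only illustrative) usually gives a steadier picture of
sustained write throughput:

 # dd if=/dev/zero of=/users/glusterfs_mnt/sample bs=1M count=100 conv=fdatasync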
Here are my vol files for the server and the client:

# /etc/glusterfs/glusterfsd.vol

volume posix
  type storage/posix
  option directory /users/gluster-data
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.addr.brick.allow *
  subvolumes brick
end-volume

#/etc/glusterfs/glusterfs.vol

volume brick1
 type protocol/client
 option transport-type tcp
 option remote-host   # IP address of the remote brick
 option remote-subvolume brick# name of the remote volume
end-volume

volume brick2
 type protocol/client
 option transport-type tcp
 option remote-host   # IP address of the remote brick
 option remote-subvolume brick# name of the remote volume
end-volume

volume brick3
 type protocol/client
 option transport-type tcp
 option remote-host 
 option remote-subvolume brick
end-volume

volume brick4
 type protocol/client
 option transport-type tcp
 option remote-host
 option remote-subvolume brick
end-volume

volume rep1
 type cluster/replicate
 subvolumes brick1 brick2
end-volume

volume rep2
 type cluster/replicate
 subvolumes brick3 brick4  
end-volume

volume distribute
 type cluster/distribute
 subvolumes rep1 rep2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes distribute
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume



  
_
Téléchargez Internet Explorer 8 et surfez sans laisser de trace !
http://clk.atdmt.com/FRM/go/182932252/direct/01/___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users